Understanding Students’ Views of the Crit Assessment

Pages 44-67 | Published online: 15 Dec 2015

Abstract

The crit is the most common feedback and assessment method used in architecture and many other art and design programmes. Whilst considerable research has been conducted on the crit, little attention has been paid to students’ perceptions of the process or to understanding how and what they learn from it. This is truly a missed opportunity, given that it has been identified as the greatest source of student dissatisfaction. The aim of this research project was to understand students’ opinions of the primary method used to assess them and to identify ways in which the process could be adapted to maximise their learning. Focus groups were held with students from each level of the undergraduate architecture course at Liverpool John Moores University.

Despite recognising some positive qualities of the crit, focus group participants associated many more negative issues with it. These issues are discussed in detail, with reference to contemporary pedagogic theory and best practice in assessment and feedback, to the extent that the crit's fitness for purpose is questioned. Alternative formats to the traditional crit are then evaluated within the framework of existing research and, in conclusion, a radical re-evaluation of the traditional crit is proposed, with the recommendation that alternative methods are adopted alongside (if not in place of) it.

Introduction

Rowntree (1977, p.1) observes that, 'If we wish to discover the truth about an education system, we must look into its assessment procedures'. It is inevitable that increased pressure on universities to improve their performance in league tables filters down to tutors, who must strive to enhance the teaching and learning experience. With assessment and feedback forming a key component of student satisfaction surveys, the need to address these topics has rarely been more pertinent. Whatever the motivation for focussing greater attention on the quality of assessment and richness of feedback, the end result should still be an improvement in approach — driven through an understanding of the students' own opinions of their assessment and feedback.

Critique by jury — commonly known as a crit, jury or design review — is the principal method of feedback and assessment for design modules in architectural education (Anthony, 1991; Parnell et al., 2007). Typically, students present drawings and models of their response to a project brief in front of a small panel of tutors and an audience of their peers, and describe the ideas underpinning their work. Tutors provide verbal feedback on the design and make suggestions for ways in which it could be further resolved and enhanced (Parnell et al., 2007, p.5). Crits occur throughout the project, and the final one is 'the ceremonial culmination of each studio design project' (Lewis, 1998, p.77).

At Liverpool John Moores University (LJMU) the format of crits is essentially the same at each level of the course, including at the post-graduate stage. Each cohort is divided into groups of 15 to 20 students, and each group is critiqued by a panel of at least two tutors, occasionally more. Sometimes guest critics or postgraduate students are invited to participate, but this is sporadic and unstructured. Crits take place in the design studio over the course of one day, sometimes overrunning into the evening; the duration of each individual session varies but is typically around twenty minutes. Each student pins their drawings on the wall and places models beside them. They then deliver a brief verbal overview of the work and the ideas that underpin it, after which tutors ask questions and deliver feedback. The tutors all have a similar role in the crit — to provide verbal feedback; they are not, as one might imagine, charged with reviewing different aspects of the project. The student usually stands beside their work with the tutors facing them on the front row of an informal semi-circle of seats, the audience of peers beyond. A different peer is allocated to record the tutors' commentary for each student. Students are critiqued in turn, and those not being reviewed should be present for each other's crit; however, this is not enforced and students often move between the different panels (and the café) during the course of the day. The level of involvement of the student audience in each crit varies, but unless the tutors make a concerted arrangement for students to participate in the process, they typically observe passively from behind the tutors. In part this is due to the physical layout: with tutors sitting in front of the work, their backs to the observing students, it is difficult for peers to see the work being discussed and thus to engage in any dialogue. A substantial proportion of the research in the literature review (Stuart-Murray, 2010; Parnell et al., 2007; Ilozor, 2006, p.53; Koch et al., 2002; Nicol and Pilling, 2000; Anthony, 1991, p.158) strongly suggests that this traditional format is common to many other architecture programmes, both nationally and internationally.

The crit is used for both formative and summative assessment. The timing of interim crits depends on the duration of the project; for example, they might occur at three to four week intervals during a semester-long project. At the final crit a mark is given to each project which is moderated at portfolio review (a closed session at which the semester’s work of each student is reviewed by all design tutors for that cohort) to ensure parity between the different panels critiquing each of the student groups.

The crit has been the subject of considerable research (Stuart-Murray, 2010; Blythman et al., 2007; Parnell et al., 2007; Webster, 2007; Ilozor, 2006; Sara and Parnell, 2004; Parnell, 2003), which generally concludes that it is an integral element of architectural education with substantial learning potential, yet that it often creates an adversarial environment, has confused objectives, and varies considerably in its capacity to provide meaningful feedback. Furthermore, while its explicit objectives are to provide formative feedback and assessment, the traditional crit is much more complex than may be supposed and has other — potentially conflicting — purposes. These include: a celebration of the completion of a project or stage in the process; an opportunity for the student presenting to perform; an appraisal of the quality of visual media utilised; an appraisal of the processes the student has adopted in designing the project; and an appraisal of the design object itself — the product. If student learning is directed by what is being assessed — a widely accepted concept — then addressing such a multiplicity of often implied or unspoken objectives is likely to be a confusing and daunting prospect. Is the crit trying to do too much? Equally, a student who has fulfilled the requirements of the project brief might expect a constructive crit, only to find that the discussion centres on the extra intellectual and creative thought beyond the minimum requirements, which is at best only hinted at in the assessment criteria. If the crit is more than a mechanism for providing feedback and assessment against explicit objectives then it must inevitably be compromised in this regard. Interestingly, it is rarely used to evaluate teaching.

Little research has focussed on students' opinions of the traditional crit. That which does (White, 2000; Wilkin, 2000; Anthony, 1991) again identifies an adversarial environment that lacks peer engagement and provides inconsistent feedback, and although it recognises the crit's potential for learning, it questions its effectiveness in practice. Such research does not explore students' perceived level of learning in any depth, nor does it examine the quality of feedback or the format of crits in any detail. If, as Prosser and Trigwell suggest, '… to improve the quality of students' approaches to learning and their learning outcome, university teachers first need to determine students' perceptions of their assessment' (2001, p.4), then a crucial element of understanding is absent.

Assessment undoubtedly affects the quality of students' learning, but it is not the method itself that affects them — rather their experience of it (Ramsden, 2003, p.184). Prosser and Trigwell maintain that, '… within the same class there is substantial variation in the way students perceive … the nature of assessment' (2001, p.81). Students should value assessment as an integral element of their learning experience. It is probable that the crit will continue as the principal, if not exclusive, method of delivering feedback and conducting assessment in art and design courses; it is therefore imperative that it is conducted in a manner that maximises its contribution to the diverse nature of student learning.

The aim of the research project was, therefore, to identify the range of perceptions of the crit as a method of providing formative feedback and of assessment across the student body. With the use of the crit commonplace for other art and design subjects, such an understanding would have widespread relevance.

Research Methodology

Litosseliti states that focus groups are appropriate for, 'Gaining information on participants' views, attitudes … and perceptions on a topic' (2003, p.18). For a project seeking students' opinions, focus groups were therefore considered a particularly appropriate research method. They can also generate ideas (Krueger and Casey, 2000, p.3) — a useful attribute for considering ways to evolve the crit. Whilst focus groups have weaknesses, such as perceptions being created within the group itself (Litosseliti, 2003, p.21) and analysis misinterpreting emphasis (Flemming, 1986, p.551; Svensson and Theman, 1983, p.13), on balance they were considered the most appropriate method. They were selected in preference to questionnaires and one-to-one interviews for their potential to stimulate debate.

The focus group process

In order to obtain a cross-section of opinion, a focus group was held for each of the three levels of the undergraduate course. All students from each cohort were invited to the session by email and participants were selected on a 'first-come first-served' basis, as other selection processes presented problems; for example, pre-selecting students to participate could be perceived as engineering a particular dynamic. Random selection (pre-invite) was also considered, but it was thought that students attending of their own volition were more likely to express their views freely.

The focus group sizes were nine, 11 and 10 students, drawn from level one, two and three cohorts of 100, 58 and 64 respectively. The gender ratio of the groups (60% male and 40% female) was marginally nearer equal than that of the respective cohorts, where the proportion of male students is greater. Each group included a balanced mix of abilities. Whilst this was fortuitous rather than premeditated, it ensured that the data set was not weighted towards the views of stronger or weaker students, who are likely to respond differently.

Merton et al. (1990) found that people revealed sensitive information when they felt they were in a comfortable environment. Affirming the confidentiality of the process reduced reticence to contribute and increased the depth of responses. Each focus group was issued with Participant Information Sheets confirming: the purpose of the study; that participation was voluntary and participants could leave at any time; what participants were required to do; that both the participants' identities and their contributions were confidential; and, finally, what the results of the study would be used for.

As advocated by Krueger and Casey (2000, p.43), each session began with open-ended questions to prompt debate, such as 'What do you feel you learn from crits?', and then progressed to increasingly specific issues that might reveal particular values associated with the crit, such as 'What skills do you think that crits give you that will be useful when you are practising architects?' or 'Are there alternative methods of assessing your design work you would prefer?'. Each session lasted between one and one-and-a-half hours.

Analysis of focus group dialogue

Each discussion was recorded and transcribed. Following the methodology proposed in Litosseliti (2003, p.87), after an in-depth reading of each transcript, common themes arising within the dialogue were identified. This enabled related comments that might have occurred at disparate times in the session to be clustered, and a set of key themes soon emerged, including: awareness of assessment criteria, tutors' conduct during crits and the value of crits as a learning experience.

Each comment within the three transcripts was coded to identify the theme to which it related. Next, the text of each transcript was given a different colour and each participant comment was pasted under the appropriate theme heading. All of the comments relating to a particular theme could then be read together, whilst the colour identified which level a given response came from (Krueger and Casey, 2000, p.133). As one aim of the project was to identify whether or not perceptions of the crit are materially different between the three levels, it was important to be able to identify similarities or differences between the three groups' responses to the same theme. Perhaps surprisingly, there was a high degree of consistency in responses from all three levels. Whilst differences in emphasis were identified, the majority of comments made by the group for one level were echoed in the others. This means that any developments in the way crits are conducted could be implemented uniformly across all levels.

Findings and Outcomes

Some positive attributes of the crit were identified. For example, feedback is direct and can be readily absorbed into the students’ projects.

“I find that when I go to the crit you get really good input. It’s not so much a problem with the actual crit or what the crit is, it’s so much more helpful than most of our tutorials.”

(Level one)

Although many found presenting their work scary and confrontational, participants valued the opportunity to present and discuss their work; in this respect the crit also enables tutors to better understand students and their learning. Even level one students, for whom crits are a novel process, saw value in the discussion that takes place during crits, and in becoming versed in defending their work:

“It gives me the opportunity to say, ‘Why?’ and ‘How does that work?’”

(Level one)

“I think that the way that we are reviewed now is good because it gives you a chance to defend your ideas.”

(Level one)

However, positive comments were in a significant minority and there is evidence of much room for improvement. For example, a level three student observed:

“Only occasionally do I see the learning from it as I know which tutors I’ll get and they’ll give me something and I’ll get good feedback and comments…. You really shouldn’t be standing there in that kind of position, thinking that this is the time I’m going to get ripped apart; it should be a time you think you are going to learn and get something back from it.”

At the heart of every good relationship lies good communication

Perhaps most disconcerting was the apparent lack of understanding of many aspects of the assessment process by both participants and their tutors. Numerous participants (even in the level three group) commented that they do not know or understand what they are assessed on, and suggested that they should be given an understanding of the marking system, or that structured criteria be used during assessments to ensure that all tutors focus on the same issues.

“If we could see what [the assessment criteria] were, because I don’t think that’s cheating.”

(Level three)

“I still don’t know what we get assessed on.”

(Level three)

“Give us a loose format of the marking system at the start of the project, just so that you could get a feel what they will be being assessed against.”

(Level three)

In the level two and three groups, participants said they found it difficult to understand why some of their peers performed very well whereas others did not; they do not see a correlation between the assessment process and the outcomes.

Furthermore, participants felt there to be a two-fold inconsistency in tutors' expectations of what should be presented at each crit. Firstly, students found themselves criticised by one tutor for following a particular path, having been guided down that path by a different tutor at an earlier crit or tutorial. Secondly, students felt that some tutors were unaware of the brief, learning objectives or assessment criteria for the crit they were participating in.

“There is a lack of communication between reviewers.”

(Level two)

“They don’t stick to the brief… All of the tutors don’t read the brief — they just decide what they want by themselves. They all want a different thing.”

(Level three)

The level three group attributed this lack of clarity to the project brief being too vague, to tutors not adhering to the brief during crits — or not reading it — or to tutors deciding for themselves what to provide feedback on. These inconsistencies are compounded when the cohort is split into several crit panels and tutors are assessing in parallel, but this is unavoidable when cohorts of up to 100 students are being critiqued.

Students routinely present their work to different tutors at subsequent crits — the rationale being that each student is exposed to a range of views. However, the issue of continuity of feedback — at successive crits, between different panels critiquing in parallel, and even between different tutors on the same crit panel — remains, and although it featured at all levels, it generated the most dialogue in the level three group.

“What I find difficult is when you get different tutors’ opinions, and one of them agrees with what you have done and the other one doesn’t, and you are left thinking, ‘What do I do? Do I change it or do I leave it?’ That’s what is hard to deal with.”

(Level one)

“When you bounce around from tutor to tutor at each crit you end up with such a mixture; they give you direction but every single person’s direction is completely different, and you end up pin-balling around.”

(Level two)

“You’ve got tutors with opposite personalities who’ll say completely different things and you’re left there thinking, ‘What did I get out of that?’ and you’re confused what to do next.”

(Level three)

Although in contrast:

“It’s good to get feedback from other tutors as well.”

(Level one)

“It does help having a range of tutors.”

(Level three)

It might be expected that confusion over conflicting advice would be more relevant to level one students than to those at level three, who might by then be able to discern between conflicting feedback. As a level one student observed:

“I think it’s more of an issue just in first year, because you don’t know enough to take on board different perspectives.”

When the involvement of external critics from local architectural practices and post-graduate students was discussed, further contradictions emerged. In stark contrast to Ilozor's research (2006, p.55), participants who had experienced panels that included external critics and post-graduates unanimously viewed this as a positive experience. Students' unfamiliarity with external critics was not a detrimental issue: not one student identified it as problematic or negative; rather the contrary, with students embracing the diversity of viewpoints provided. This presents an interesting conundrum: why do students value diverse feedback when it is delivered by external critics, but not the range of feedback provided by different tutors?

Comments from the focus groups suggest that students value the feedback from external critics as it is perceived as being from the ‘real world’ and is contextualised in the wider realm of professional practice.

“I got a lot out of it as I know that what comes out of it is related to things that he does.”

(Level three)

Also, several participants considered that post-graduate students had more empathy than tutors.

“I love it when we get previous students in, like when the fifth years came in and assisted, they were great. I know and they know that they have been in the same boat that I’m in right now, and they understand.”

(Level two)

The groups debated the merits of tutors changing between subsequent crit panels and the benefit of the student's design tutor being included within the panel. Whilst positive for some, changing tutors between each crit led to confusion for others because, as well as receiving contradictory feedback, they had to recap work that had been critiqued previously, consequently missing the opportunity to garner feedback on new work.

“Although my most recent crit didn’t go that well I felt that I spent seventy-five percent of the time explaining something that I had explained three times before. It wears a bit thin.”

(Level two)

In addition to asking for a clearer understanding of the assessment process and criteria, participants suggested that being shown examples of previous work would be beneficial to them.

“Publishing examples of previous years’ projects would enable you to see what standard is expected. It needn’t be the whole project, but an example of the range and sophistication of drawings and models.”

(Level three)

Seconds out, round two…

Unlike most assessment methods, the crit involves direct interaction between student and tutor. Participants felt that tutors do not respect their opinions and that the process is unnecessarily adversarial. They believe that the quality of their crit (and therefore of the feedback they receive) often depends on individual tutors' preferences, personality and even the vagaries of their mood:

“It can be affected by the approach of a given tutor on that day and even by the mood of the tutor — if [name] is in a mood then everyone is crap. Or if [name] is in an arty mood then it’s not helpful all of the time.”

(Level three)

“It’s like when you get [name] everyone is like, ‘Oh shit’ because you know that it’s going to end up in an argument.”

(Level two)

Numerous participants associate crits with negativity and find they are frequently cut off part-way through explaining their ideas — when a tutor feels compelled to interject — and are thus unable to complete their résumé of the project.

“I’d try and get rid of the whole feeling of your being in front of a firing squad… you think if you get the smallest thing wrong he or she starts barking at me. It just feels demoralising.”

(Level one)

“But if they make you feel so low that you don’t go away thinking, ‘I can do this’, you go away thinking, ‘I’m so crap at architecture I can’t do it.’”

(Level three)

Students rightly feel that a crit should be a constructive part of their learning rather than the demoralising experience several described it as. They associate it with feeling scared, and see the whole process as one of identifying flaws as opposed to constructive learning. Negative criticism can also lead to confusion: having been told what is wrong with their work, participants were left unsure how to remedy and progress it. Participants at all levels considered that negative criticism led to a lack of confidence, whereas positive criticism not only contributes to learning — it also directly affects morale.

“It really does help when they’ve given that bit of positive at the beginning. Even if it’s just the smallest bit, it really does make you want to do it better.”

(Level three)

When asked what single thing they would change about crits a level three student responded:

“Improve the way in which feedback is given.”

The participants' approach to a crit is affected by how confident they feel. Perhaps if the atmosphere of a crit were perceived as more positive and constructive, students would approach it with a more confident mindset and consequently be more likely to learn.

Recording feedback

Once the student being critiqued has given a brief synopsis of their work, typically explaining the ideas that underpin what is represented in their drawings and models, tutors deliver feedback verbally. This is transcribed as a set of hand-written notes by a peer in the audience, which are given to the student at the end of their crit. As it is difficult for the student being reviewed to listen to and remember the barrage of feedback that might be delivered over twenty minutes, the intention is that the notes form a record of the feedback which can be referred to later and discussed at their next tutorial. Participants are frustrated that tutors' feedback is recorded by their peers, mostly because the quality of the record is too dependent on the individual taking notes.

“Need a better way of recording the comments than the feedback sheets and people dictating what tutors are saying. At present it is too dependent on the person taking notes; in a very negative crit where the student has to be very defensive he or she frequently can’t remember what criticism has been made.”

(Level three)

“Those feedback sheets are the worst thing in the world. Half the time I can’t read the person’s handwriting, and then the rest of it they haven’t put it in a clear way, they have just written odd words in.”

(Level two)

This was a particular issue for level one participants due to the inexperience of their peers at this early stage. They would prefer tutors to complete the feedback sheets to ensure accuracy and the prioritisation of primary issues.

“I think it would be better if the tutors were actually to give us something as well, because when the tutor has said stuff and the student taking notes might not understand what the tutor is saying.”

(Level one)

Does presenting provide learning in itself?

There was some difference of opinion as to whether the process of presenting work at a crit refines a skill that will be of value to students in their future roles as practising architects.

“At the end of the day you are going to be talking to clients, and we may as well have seven years of getting really good at talking to people, that’s definitely the best way to do it, I think.”

(Level one)

“I’ve showed clients work as I’ve been going along at stages, but I’ve never pinned it up on a wall. You sit and you talk through it, it’s more informal; it’s not like a big, formal pin-up. When a client comes in you just sit and talk about it. You don’t pin it up on the wall and have a crit sheet; you just don’t do things like that in my experience.”

(Level three)

Interestingly, the level one student believes that crits will prepare them for professional practice, whereas the level three student (with experience of client presentations) does not consider them particularly relevant.

Interim and final crits

Interim crits — which occur during the development of a project — focus on formative, developmental feedback, whilst at the final crit the student's project is also assessed. Level one students do not always experience interim crits due to time constraints; however, levels two and three were unanimous in preferring them over summative crits. They considered interims to be more beneficial to their learning, less stressful, and more encouraging of dialogue with tutors.

“At the interim crits you feel like you can talk to them [the tutors] more. It’s like your stress is less and you’re like, ‘What if this…?’ It is better.”

(Level two)

Without the concern of being graded, participants viewed interim crits as having greater benefit to their learning and felt more able to discuss their project, rather than defend it. Indeed, some participants considered the final, summative assessment a fait accompli, an imposition in order to achieve a grade. Anthony (1991, p.35) supports the students' perception that interim crits are a much more effective learning experience than the final one. Just as interim feedback should contribute to evolving that project, feedback delivered at a final crit needs to feed forward into subsequent projects and not relate only to what has been completed. Blythman et al. (2007, p.2) echo this observation, and suggest that there is an argument for increasing formative crits, where feedback is given at an interim stage.

Discussion and Implications

Lack of consistency in crits takes several forms, such as tutors giving conflicting feedback, having different expectations, and contradicting themselves at subsequent crits. Arguably this issue goes to the heart of assessment in creative disciplines. Architecture is a pluralistic and subjective discipline, and it is therefore to be expected that tutors will not always agree on the most appropriate response. All three groups recognised this — even after just their first semester, level one students understood that there is no right or wrong solution — and that they should interpret between different opinions. Furthermore, it could be argued that adjusting to different points of view is part of the transition from secondary to higher education and independent learning. However, all three groups found conflicting advice equally confusing, leaving them unsure how to progress their work irrespective of their level, which suggests that such discernment does not develop through experience alone. Students do not receive guidance in weighing up such differences of opinion, which is clearly an important omission. Blythman et al. (2007, p.2) suggest that seeing tutors take contradictory positions and disagree in crits is important, as it demonstrates that there is more than one solution to a given brief. However, if the primary purpose of the crit is to provide feedback that contributes to learning, students should not be left confused by such differences of opinion and should finish the session with clear strategies to progress their work. Feedback must feed forward.

Participants felt that tutors' expectations of what should be presented at crits lack parity. When several panels are critiquing simultaneously, it is unreasonable if they respond to similar work differently because of disparate expectations. Students also considered that tutors do not know — or choose to ignore — the requirements of the project brief. These views are supported by both personal experience and other research, including Anthony (1991, p.30), Ilozor (2006, p.53) and Wilkin (2000, p.103). This creates a hidden agenda, where students are uncertain as to what they will be critiqued against, and could be likened to Snyder's 'hidden curriculum' (1973): students feel they are being appraised against issues or subject matter that their critics favour over what they have been taught in design studio (Anthony, 1991, p.12).

The majority of participants lacked understanding of the assessment process and were unaware of the assessment criteria. Whilst the learning outcomes and work required for each module are listed in widely available course documentation, assessment criteria are only stated on the assessment record sheet. Frequently, the first time a student saw the record sheet was when it was returned to them with written feedback at the end of a module. Consequently, the student was only aware of the criteria against which they were being assessed after the event. Not only is dissemination of criteria basic good practice (Ramsden, 2003, p.182), a specific requirement of this institution's Principles of Assessment is to provide students with clear, explicit information on criteria and what is expected of them for individual assessment tasks (LJMU, 2008).

Brown and Smith suggest that, 'inferences drawn about a student's assignment may vary widely from assessor to assessor, particularly if they are not using explicit criteria' (1997, p.8). Clarifying a module's assessment criteria might go some way to improving continuity of feedback. The module leader could issue a crit strategy to students and tutors, establishing — without being prescriptive — objectives for interim and final crits and identifying the assessment criteria for that module. Tutors would thus be reminded of the objective of each successive crit, engendering more parity between their expectations and the direction of the feedback they give within the overarching framework of the assessment criteria.

However, the lack of understanding of assessment objectives and tutors' expectations is unlikely to be fully resolved simply by disseminating assessment criteria. O'Donovan et al. (2004, p.333) suggest that a single-minded focus on explicit articulation falls short of providing students and tutors with common and insightful knowledge of assessment standards and criteria. Rather, they suggest that meaningful understanding is best achieved through a combination of explicit and tacit references. They define tacit knowledge as that which is learned experientially or that cannot be easily articulated — both of which could readily be applied to an architectural design project. This is further reinforced when linked to the 'connoisseur' model of assessment, typified — they suggest — by the phrase 'I cannot describe it, but I know a good piece of work when I see it' (O'Donovan et al., 2004, p.328 [emphasis added]), something most tutors could readily associate with crit assessments.

Tacit expression could be addressed in architectural assessment through the use of examples of project work. A recurrent request by students, in both this research and in module feedback surveys, is to be shown work from previous cohorts. It is insufficient to show examples of previous projects and assessment criteria in isolation; tutors would need to use the examples as a medium through which to demonstrate how a project is evaluated or, put another way, how assessment criteria are applied to inform the assessment. Contextualising the criteria in this way is likely to deepen students' understanding of their application, so that they become less arbitrary and abstract. O'Donovan et al. (2004, p.332) propose using good and borderline examples for a session in which students evaluate work themselves. The process is then discussed between peers and tutor, thus creating a working knowledge of the assessment process. This might also help address participants' professed inability to understand their performance relative to that of their peers. Elton (2004, p.52) suggests that a way to avoid the tacit nature of subjective judgements would be to adopt a holistic assessment based on a number of qualitative standards, which he likens to the judgement of performance in figure skating. These process-based qualitative standards would become a method through which to derive assessment criteria for the module.

Retaining the same tutors from one crit to the next was another primary issue for some participants, although others recognised the opportunity to receive a range of feedback. A level two participant suggested that one member of the panel remain constant while the other rotates. This might help to address participants' frustration at having to repeat themselves when they are seeking feedback on new work, and at receiving feedback in one direction only for it to be directly contradicted at a subsequent session. However, perhaps if students felt that different tutors would critique them on a more consistent basis and with similar expectations, they might be less concerned with the constancy of a given individual.

A call for constructive feedback

The research strongly suggests that participants associate crits with demoralising negativity, often devoid of constructive criticism and positive feedback. This was one of the most frequent responses when participants were asked what they would change about crits. Constructive feedback was considered important in feeding forward, which is necessary if feedback is to be of maximum benefit to learning; level one students recognised that if they are told something works, they will carry those positive qualities forward into the next project.

It is possible that students perceive what was intended as a constructive comment negatively, particularly if they feel in a position where they are defending their work. Can the crit be made less confrontational? Can a tutor ensure that feedback is received with the intent that it was delivered? Perhaps tutors should be mindful to ensure that when making critical comments, as invariably the process demands, they identify why they feel that way, and suggest alternatives. In other words, perhaps it is not so much what is said, but the way in which it is delivered.

“When [name] says things he doesn’t attack you, he says, ‘You could do this’; you feel really confident in what he is saying because he’s not attacking your design and taking it apart.”

(Level two)

Participants recognised that tutors need to be critical, but thought that they should not dwell on the negative and should strive to include positive, constructive feedback in order to maximise their learning. Is it so difficult to make at least one positive comment in each crit? As Ramsden says generally of assessment, ‘Negative comments need to be carefully balanced by positive ones; great delicacy is needed if critical feedback is to have the effect of helping students, especially inexperienced ones, to learn something rather than to become defensive or disheartened’ (2003, p.188).

The format of the crit has strength in providing feedback instantaneously and facilitating dialogue, but perhaps the very fact that feedback is verbal also means it can become overly negative — sarcastic even. Would tutors put into writing all that they say? Even the term 'crit' itself carries connotations that the process is overtly critical. The use of the term 'design review' or 'design evaluation' might lessen this.

Wilkin (2000, p.103) suggests that personal anxiety in crits is much less in evidence for level three architecture students than for level one. The data set contradicted this proposition, with students at all levels associating fear and nervousness with the process. Level three participants highlighted both the tutors themselves and the environment in which crits took place as contributory factors.

Other ways to record and disseminate feedback

Recording feedback was an issue raised at all levels, primarily because the quality of a peer's transcription was highly variable. Additionally, at level one, participants were concerned that the peer might not understand what had been said, or might misspell it, potentially rendering that piece of feedback inaccurate, even useless. Using feedback sheets as a record of dialogue is also somewhat open to misinterpretation; for example, subtleties within the original conversation — such as emphasis and tone — are not recorded (Svensson and Theman, 1983, p.13). Consequently, students reading their feedback sheet after the event may not be able to discern any hierarchy of importance in the feedback.

Whilst it is demanding for tutors to simultaneously deliver verbal and written feedback, much could be clarified through a collection of sketches and select written feedback on the principal areas for development. Alternatively, podcasts are increasingly being used as a medium to deliver feedback (Roberts, 2008). Each crit could be digitally recorded and disseminated so that students have a full record of the dialogue; they would be more likely to identify key issues often lost in transcription, as these would be clear from the tutors' intonation.

Is the traditional crit fit for purpose?

Can the traditional crit, with its roots in the nineteenth-century École des Beaux-Arts (Anthony, 1991, p.9), evolve to address its negative qualities and embrace contemporary pedagogic theory and best practice? This is a crucially important question for, if it cannot, the future of this cornerstone of assessment in architectural education must be questioned. White's (2000) research suggested that students do not view the traditional tutor-led crit as a learning opportunity, but rather as a forum for judging tutors' reactions to their project. This would suggest a fundamental dichotomy between the intended role of the crit and how students perceive it. However, the data set does not generally support White's view: students commented that they learn both from each other and from tutors' comments, though they observe that such learning can be sporadic and unstructured. Furthermore, do students become so focussed on the event itself that they miss out on its full learning potential?

Central to the debate is the struggle for power between tutor and student, which arises from the relative positions of authority of presenter and critic (Sara and Parnell, 2004, p.2). Stuart-Murray (2010, p.8) highlights the detrimental impact of this asymmetric power relationship in crit assessments, resulting in confusion and 'tribalism'. Both Anthony (1991, p.118) and Ilozor (2006, p.59) argue that the crit should be democratised, with critics no longer occupying the position of dictators over powerless students. When students in a subjective discipline are presenting their work, without a clear understanding of what is required, to tutors with differing expectations — the hidden curriculum — it is highly likely that this sense of disempowerment will be exacerbated. Vickerman (2009, p.222) suggests that peer reviews empower students in their learning and feedback. Such alternative formats — discussed below — might help to redress the balance of power in crits.

The design project is a tremendous opportunity for learning, encouraging students to work at higher order levels of cognition. However, the crit in its traditional format is certainly not a means through which it is possible to evaluate whether or not this more complex thinking is evident, and as such it cannot necessarily encourage it in the student. As research by Stuart-Murray highlights (2010, p.7), the crit is structured around a student describing their work — a lower order (multi-structural) level of activity (Biggs, 2003, p.48). If assessment motivates the direction of student learning (Elton, 1988, p.220), the fact that the traditional crit does not explicitly prompt students to demonstrate higher order levels of engagement — nor necessarily evaluate them — is a disconcerting shortcoming. Nor does the crit necessarily facilitate the experiential cycle of learning (Kolb, 1984), in which feedback is aimed at addressing development in subsequent projects. Furthermore, as the crit does not facilitate a collective evaluation of the different levels of students' understanding, it cannot address Ramsden's (2003, p.202) proposal that assessment should enable tutors to identify shortcomings and evolve the curriculum and teaching in response.

Alternative reviews, such as peer crits or Stuart-Murray's (2010) process-map crit (focussing on the design process rather than the product) and metaphor crit (focussing on the use of metaphor within the project), may be much more successful in evaluating where students have engaged with higher-order levels of cognition. In both instances the objectives are more targeted and explicit than in the traditional crit, where students describe their thinking throughout the whole design project up to the day of the review. Objectives for the crit could be set so that students explicitly refer to interpretations of theory or precedent in the context of their project, thereby demonstrating higher order levels of cognition such as analysing, applying, theorising and abstraction (Biggs, 2003, p.48).

One common argument supporting the traditional crit is that it refines presentation skills that will be of value to a practising architect (Ilozor, 2006, p.53; Anthony, 1991, p.29). Indeed, the UK Quality Assurance Agency's standards for architecture (QAA, 2000) refer to crits as an integral teaching strategy that prepares students for professional practice. However, Sara and Parnell contend that the crit is not as effective at developing students' communication skills as tutors would like to think (2004, p.2) — a view also supported by Wilkin (2000, p.105). Whilst practising architects will have to present and occasionally defend their work, the context and dynamics of the crit and the client presentation were perceived by some as very different. Is the environment of a crit, which all levels identified as confrontational, appropriate for honing the skills needed for professional presentations, or does it contribute — albeit unwittingly — to the adversarial mindset of the construction industry highlighted by Latham (1994)? Stuart-Murray comments of the architectural critique that, 'the formative consequences of the negative and confrontational critique ramify from the classroom to the professional office' (2010, p.9). Tutors should not delude themselves that the crit is as effective in training students to present as they might like to think, and should be conscious of the seeds of counter-productivity they may be sowing with their students.

Repeated use of one form of assessment is highly questionable, as heavy reliance on a single method may result in a limited range of learning being assessed (MacLellan, 2001, p.315). It may also disadvantage those whose learning style or personality favours other approaches — such as the naturally shy or nervous student (Elton, 2004, p.49). That such a majority of the focus group participants highlighted the adversarial nature of crits suggests that this latter point may be particularly relevant. There are alternative formats to the traditional crit, including the process-crit, which focuses on the approach to learning as opposed to the outcome. Equally, student-led crits have been shown to facilitate higher-order discussion than traditional ones. Could such alternative formats supplant the traditional crit altogether (Parnell, 2003, p.2)?

What are the alternatives to the traditional crit?

If the feedback process were designed anew, what would it look like? If students can potentially learn as much from their peers' crits as their own, how can we engage them more actively when cohort sizes and tiredness frequently reduce the student audience of a crit to passive observers, dozing at the back in front of drawings and models they can barely see?

Previous research has suggested and, in some cases, evaluated alternatives to the traditional crit (Stuart-Murray, 2010; Sara and Parnell, 2004; Parnell, 2003; Brindley et al., 2000; White, 2000; Anthony, 1991). Brindley et al. (2000) conclude that such alternatives improved students' communication and demonstrated the need for each crit's objectives to be made clear, bringing normally implicit learning objectives to the surface. During the focus groups alternative formats were proposed, such as submitting a brochure, or closed crits in which work is reviewed by tutors without students present. Interestingly, participants were unanimous in preferring to present their work themselves, because it gives them the opportunity to defend it — a description symptomatic of the adversarial nature of the crit — and to enter into a dialogue with tutors. This is clearly a strength of the crit. Participants also suggested, whilst recognising the inherent issue of student numbers, that smaller groups would facilitate greater engagement with each other's crits, and stated that being expected to ask questions themselves enhanced their levels of concentration and engagement.

Peer reviews

White (2000) appraised student-led crits as an alternative to the traditional crit, with the objectives of increasing student participation, encouraging skills in presentation and in asking for feedback, and encouraging students to criticise their own work and that of others constructively. All feedback was provided by the student's peers, with the tutor's role restricted to that of facilitator. However, whilst successful in raising levels of participation, developing critical analysis and increasing constructive criticism, concern was raised over the quality of feedback provided by a student as opposed to a tutor. Also, student critics — whilst having the insight provided by their own involvement in the design process — were unlikely to be fully aware of the project's objectives or to have an appreciation of wider architectural issues.

Peer review offers a number of learning outcomes. It encourages student autonomy, confidence and deeper learning, and develops analytical and evaluation skills (Vickerman, 2009, p.222). Ramsden (2003, p.199) suggests that structured use of peer review encourages a more responsible and self-critical view of each student's achievements. Stuart-Murray (2010, p.16) also identified that student-led crits showed higher levels of both understanding and participation, as the process is cognitively demanding rather than passive (Nicol, 2011). If staff-to-student ratios continue to rise — which is likely as HE funding is cut — peer reviews might present an appealing strategy. However, at a time of increasing fees, is it acceptable for students to adopt the role of critic in place of tutors? One way to address this issue would be to redefine the objective of peer reviews so that they become more about students learning critical analysis skills than about having their work critiqued. Students are already encouraged to participate in crits, but they tend to be reticent for a number of reasons surrounding the student-tutor power dynamic, such as not wishing to openly criticise a peer in the presence of tutors (Wilkin, 2000, p.105), or not wishing to make inarticulate comments. Observing the work of peers is an intrinsic value of the crit, but would that not be heightened through the deeper engagement demanded by critiquing? Additionally, if peer reviews utilise the module's assessment criteria, students' understanding of them (and of their application to project work) will be deepened through direct involvement in their interpretation. This might be a more effective way to achieve a greater understanding of the assessment process than the use of examples of previous work.

Developing critical evaluation skills is one of the quintessential components of architectural education, in order that students can learn to critique both their own work and that of others. Although tutors might think that the traditional crit achieves this through observation of and engagement in the process, such tacit exposure clearly falls short of achieving this aim. Parnell (2003, p.2) has suggested combining the two, with students critiquing work first, followed by tutors in the same session, but White (2000, p.218) suggests that this might suffer from the traditional student-tutor dynamic, either reducing the perceived value of peers' comments, or causing them to be ignored in favour of the tutor feedback to come.

A solution could be found in peer-led crits where the tutor, acting predominantly as facilitator, also provides feedback on the quality and constructiveness of the peers' critique. Students would therefore receive both feedback on their design project (from their peers) and the benefit of critical analysis (from the tutor). This would address White's questioning of the role of the tutor in peer-led crits (2000, p.218). Through applying evaluation criteria to a range of projects, students will increasingly recognise that quality can be achieved through diverse solutions (Nicol, 2011). Increased skills in self-criticism learnt through peer reviews might also help address conflicting feedback from tutors, as students become more proficient at weighing up the merits of different arguments. White identified that peer-led crits developed students' understanding of tutors' previous feedback, assisting their interpretation of criticism (2000, p.214). If this is the case, there should be a more explicit agenda for developing such skills, as opposed to working on the assumption that students will develop them through osmosis by being repeatedly exposed to conflicting feedback.

One of the principal issues raised by participants was the quality of peers' written records of the verbal feedback given by tutors. Currently, students at LJMU do not receive any guidance in taking feedback notes, nor do they have structured involvement in the process of providing feedback. By delivering feedback themselves in peer sessions, they might also become more adept at recording it during tutor-led sessions.

Exhibition reviews

At exhibition reviews students pin up their work and it is reviewed by tutors without the students present. From the focus group discussions, the only place seen as appropriate for such a format would be in place of the final crit, which was likened to a competition submission or planning application in professional practice. However, there is a risk that the assessment would be more likely to focus on the final outcome — the work on the wall — as opposed to an evaluation of the design process the student has engaged in. During Brindley et al.'s trial of exhibition reviews, a tendency to revert towards the traditional crit was observed (2000, p.112).

The complexity of the crit is in part compounded by its dual role as a vehicle to deliver both formative feedback (interim) and assessment (final). Tutors assume that students know the difference, and this is generally supported by the data set. That is not to say, however, that the two processes should not be explicitly separated. Interestingly, Knight (2001) proposes that the same method should not be used for formative and summative assessment, as the former encourages openness and the latter inhibits it — a view also supported by this research. Indeed, Knight goes on to propose that the formative element should include methods such as peer assessment. This argument would suggest that the summative element of the crit be removed completely, so that its only function is to provide formative feedback, be it peer- or tutor-led. Summative assessment could then take place independently at a portfolio review or final exhibition, although this would demand that assessment criteria are structured to ensure that process is evaluated in appropriate depth. This would remove one of the multitude of issues that the crit seeks to address and — to some extent — would clarify its objectives in the students' eyes. That the participants were unanimous in their preference for interim crits over final ones adds weight to such a proposition.

Public reviews

As identified above, tutors' defence of the traditional crit as preparation for presenting to clients, consultants and other bodies in professional practice has been questioned by both this and other research. However, involving the wider public — beyond practising architects as external critics — could address this. Guest critics representing clients and users for projects could be brought in, and the crit could then explicitly evaluate students' skills in presenting, particularly to non-specialist audiences, an essential element of professional practice. Ilozor trialled the involvement of consultants, client and user representatives in crits, and concluded that a more representative panel would enrich students' learning experience rather than over-emphasise their inadequacies, but that such crits are likely to be most appropriate at the higher levels of the course (2006, p.60). Interestingly, Wilkin identifies that the involvement of clients and users in crits is generally supported more by students than by tutors (2000, p.105).

Alternative layouts to traditional crits

Stuart-Murray (2010, p.12) highlighted how changing the layout of the crit can break its formality. Traditionally, tutors sit facing the student being reviewed with their backs to the peer audience, so it is hardly surprising that there is a lack of engagement from the latter. A round-table arrangement may create more debate, as the peer group would be facing into the conversation between the student and tutors; it would be particularly suited to interim crits — especially early ones, where the format of the work is not necessarily predisposed to being hung on the wall.

Conclusions

Anthony maintains that the crit is the greatest source of architecture students’ dissatisfaction (1991, p.35), and the National Student Survey (NSS) has shown higher than average dissatisfaction amongst architecture (and, indeed, other art and design) students in their ratings of assessment and feedback (Vaughan and Yorke, 2009, p.8). Participants in this research made astute and perceptive criticisms of the crit — such as a lack of understanding of the assessment process, confrontation and difficulty recording feedback — that cause discontent and which they consider to have a detrimental impact on their learning. Their overriding concerns were about good principles of assessment, primarily: a transparent and comprehensible process, clear and consistent criteria, a balance of formative and summative assessment, and respectful, balanced feedback.

Some assessment methods are easily comprehended; the crit is not one of them. Although not a panacea, a transparent approach, with explicit requirements and assessment criteria contextualised through examples of a range of previous work, can only lessen the current disparity of expectations and improve understanding of how tutors use crits to evaluate work. O’Donovan et al. (2004, p.327) highlight that relative terms, such as assessment criteria, require anchor points to communicate definitive standards; examples of previous projects could provide such points. Tutors could also use the assessment criteria to structure their formative feedback at interim crits. However, tentative adaptation of the traditional crit can only achieve so much, and the research suggests that the traditional format could be radically overhauled to address some of its numerous shortcomings.

While participants highlighted many negative aspects, they did identify positive learning experiences provided by crits. They value the opportunity to discuss their work and recognise the benefit of immediate feedback, preferring it to the one-way street of written feedback. A respectful two-way exchange between students and tutors is potentially a powerful learning device (Boyer and Mitgang, 1996; Sara and Parnell, 2004, p.1); such dialogue can also give tutors an insight into students’ conceptions of the subject matter and their approach to learning.

Theoretically at least, the crit has the makings of a robust feedback and assessment method. As Ramsden states, ‘being required to … defend one’s own [work] not only increases a student’s sense of responsibility and control over the subject matter; it often reveals the extent of one’s misunderstandings more vividly than any other method’ (2003, p.189). The crit also offers recurrent, one-to-one evaluations of student learning, with the intention of understanding students’ achievements and informing them of their evolving progress — something of a rarity in higher education. Elton (1988, p.219) goes so far as to suggest that other subjects should adopt the assessment practices of art and design courses, in view of the way students are treated as individuals. Indeed, research in other programmes also highlights the value students place on one-to-one feedback (Crawford et al., 2010, p.1). Whatever changes are made to the crit, these positive aspects should not be lost. Indeed, with the crit so entrenched within architectural education it would be very difficult to dispense with it in any event. However, the research does imply that the crit should be fundamentally rethought, a view echoed by Anthony (1991, p.158). Alternative formats may offer more potential to address the negative aspects identified by the data set.

With the explicit objective of developing critical analysis skills, peer-led crits would also deepen students’ understanding of the assessment process and criteria and, through improved skills in self-criticism, potentially teach students to weigh up conflicting feedback. This would address several of the negative associations of the crit raised through the research. If tutors are reluctant to depart from the tutor-led crit altogether, peer- and tutor-led sessions could alternate; peer crits have been shown to increase students’ understanding of tutors’ comments in traditional crits (White, 2000, p.214), further supporting such an arrangement. Both peer- and tutor-led crits should be completely separated from the summative event, reinforcing them as forums for formative feedback and learning. This is supported by participants reporting that they felt much more likely to enter into a dialogue with tutors at interim crits, which also suggests a benefit in retaining tutor-led crits at formative stages and utilising an alternative method, such as portfolio review, for summative assessment.

If retained, the format of the tutor-led crit could be substantially improved. More informal round-table discussions would encourage greater peer participation, and more frequent and structured involvement of external critics and postgraduate students — widened to include clients, users and consultants — would be welcomed. It is uncomfortably clear that tutors should be more mindful of the manner in which they deliver feedback, and should do so more constructively and respectfully. Being critical is relatively easy; critiquing — making incisive, perceptive observations and providing forward-feeding feedback, let us call it ‘progressive feedback’ — is much more challenging.

Tutors should not assume that the traditional crit is the default option; instead, they should be open-minded about alternative formative feedback and learning methods, selecting the most appropriate for the skills and knowledge they would like students to develop. Ramsden (2003, p.178) proposes that tutors ask themselves what effect the use of a particular assessment method will have on the outcomes of student learning. White (2000, p.218) suggests that if the objective is to develop critical analysis and constructive criticism, then the same session is not the most suitable place for tutors’ feedback. Essentially, the format must be tailored to the learning objectives being evaluated: the peer crit to evaluate critical analysis, the public crit to evaluate presentation skills and techniques, the exhibition crit to evaluate holistic representation, the metaphor crit to evaluate conceptual thinking and the process crit to evaluate design methodology.

However, both experience and research (Brindley et al., 2000, p.110; Wilkin, 2000, p.100) suggest that a departure from the crit, or even adaptation of its format, is subject to inertia and reluctance from some tutors. No doubt this is a significant factor in why the crit in its traditional guise is sustained as the only, or predominant, model. It is sadly ironic that whilst tutors encourage creativity and innovation in their students, some are reluctant to adopt the same approach in their teaching methods (Anthony, 1991, p.120). The crit must be perceived as a creative and flexible event in itself.

Further research

In this research a focus group was selected from each level of the undergraduate course, thus providing a cross-sectional analysis. It is recognised that further focus group sessions with other students from each level would provide a more robust outcome, particularly given that participants were selected on a first-come basis. At LJMU the format of crits remains the same at each level — is this appropriate? Should the format change to respond to students’ evolving abilities? For example, a contradiction was noted between this and other research regarding students’ apprehension in crits. A longitudinal study, with the same group of students interviewed at each successive level, would provide a more sensitive insight into how students’ opinions of the crit alter during the course of the undergraduate and postgraduate degrees.

Clearly one route to develop further would be a similar appraisal of students’ views on alternative crits, particularly peer-led reviews structured so that tutors critique students on the quality of their critical analysis of their peers’ work, rather than on the work itself. There appears to be very little, if any, research on the latter.

The initial research proposal included the circulation of a questionnaire among alumni to establish how graduates reflect on the crit, and to identify if, and why, they consider it a learning experience that has benefitted them in their professional roles. A questionnaire was proposed (rather than a focus group) due to the disparate locations of potential participants. This strand of research was suspended due to time constraints, but the questionnaire has already been developed and could provide further valuable insights into the relationship between the crit and the professional role of the architect, particularly given the conflicting opinions between students within this research and between the different research papers cited.

References

  • Anthony K. H. (1991). Design juries on trial: The renaissance of the design studio. New York: Van Nostrand Reinhold.
  • Biggs J. (2003). Teaching for quality learning. 2nd ed. Buckingham: The Society for Research into Higher Education and Open University Press.
  • Blythman M., Orr S. & Blair B. (2007). Critiquing the crit. URL: www.adm.heacademy.ac.uk/projects/adm-hea-projects/learning-and-teachingprojects/critiquing-the-crit (accessed 2 July 2010).
  • Boyer E. L. & Mitgang L. D. (1996). Building community: A new future for architectural education and practice. Princeton: Carnegie Foundation for the Advancement of Teaching.
  • Brindley T., Doidge C. & Willmott R. (2000). Introducing alternative formats for the design project review. In: Nicol D. & Pilling S. (Eds.). Changing architectural education: Towards a new professionalism. London: E & FN Spon, pp. 108-115.
  • Brown S. & Smith B. (1997). Getting to grips with assessment. SEDA special 3. London: Staff and Educational Development Association.
  • Crawford K., Hagyard A. & Saunders G. (2010). Creative analysis of NSS data and collaborative research to inform good practice in assessment feedback. SWAP Report. Higher Education Academy Subject Centre for Social Policy and Social Work. URL: www.swap.ac.uk/docs/projects/nss_report.pdf (accessed 26 January 2011).
  • Elton L. (1988). Student motivation and achievement. Studies in Higher Education, 13 (2), 215-221.
  • Elton L. (2004). A challenge to established assessment practice. Higher Education Quarterly, 58 (1), 43-62.
  • Flemming W. G. (1986). The interview: A neglected issue in research on student learning. Higher Education, 15, 547-563.
  • Ilozor B. (2006). Balancing jury critique in design reviews. CEBE Transactions, 3 (2), 52-79.
  • Knight P. (2001). A briefing on key concepts: Formative and summative, criterion and norm-referenced assessment. Assessment Series No. 7. York: Learning and Teaching Support Network (LTSN) Generic Centre.
  • Koch A., Schwennsen K., Dutton T. & Smith D. (2002). The redesign of studio culture: A report of the AIAS studio culture task force. Columbia: The American Institute of Architecture Students.
  • Kolb D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs: Prentice Hall.
  • Krueger R. & Casey M. (2000). Focus groups: A practical guide for applied research. 3rd ed. London: Sage Publications Inc.
  • Latham M. (1994). Constructing the team: The Latham Report. London: Department of the Environment.
  • Lewis R. K. (1998). Architect? A candid guide to the profession. Cambridge: MIT Press.
  • Litosseliti L. (2003). Using focus groups in research. London: Continuum.
  • Liverpool John Moores University. (2008). Effective practice in assessment: Blue book. 3rd ed. Liverpool: Learning Development Unit.
  • MacLellan E. (2001). Assessment for learning: The different perceptions of tutors and students. Assessment and Evaluation in Higher Education, 26 (4), 307-318.
  • Merton R. K., Fiske M. & Kendall P. L. (1990). The focused interview. 2nd ed. Illinois: The Free Press.
  • Nicol D. (2011). Enhancing assessment and feedback in higher education: Principles and practice. Seminar held at Liverpool John Moores University, 9 February.
  • Nicol D. & Pilling S. (Eds.). (2000). Changing architectural education: Towards a new professionalism. London: E & FN Spon.
  • O’Donovan B., Price M. & Rust C. (2004). Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education, 9 (3), 325-335.
  • Parnell R., Sara R., Doidge C. & Parsons M. (2007). The crit — An architecture student’s handbook. 2nd ed. Oxford: Architectural Press.
  • Parnell R. (2003). The right crit for the right project: What implications might learning objectives and ethos have for the review process? Trigger paper at Studio Culture: Who needs it? CEBE and Concrete Centre conference, Oxford. URL: http://cebe.cf.ac.uk/news/events/concrete/triggers/parnell.pdf (accessed 10 January 2011).
  • Prosser M. & Trigwell K. (2001). Understanding learning and teaching: The experience in higher education. Buckingham: The Society for Research into Higher Education and Open University Press.
  • Quality Assurance Agency. (2000). Honours degree benchmark statements: Architecture, architectural technology and landscape architecture. URL: www.qaa.ac.uk/academicinfrastructure/benchmark/honours/architecture.pdf.
  • Ramsden P. (2003). Learning to teach in higher education. 2nd ed. London: Routledge Falmer.
  • Roberts S. J. (2008). Podcasting feedback to students: Students’ perceptions of effectiveness. URL: www.heacademy.ac.uk/assets/hlst/documents/case_studies/case125_podcasting_feedback.pdf
  • Rowntree D. (1977). Assessing students: How shall we know them? London: Harper & Row.
  • Sara R. & Parnell R. (2004). The review process. CEBE Briefing Guide Series 3. URL: http://www.heacademy.ac.uk/assets/cebe/documents/resources/briefingguides/BriefingGuide_03.pdf
  • Snyder B. R. (1973). The hidden curriculum. Cambridge, MA: MIT Press.
  • Stuart-Murray J. (2010). The effectiveness of the traditional architectural critique and explorations of alternative methods. CEBE Transactions, 7 (1), 6-19. URL: www.cebe.heacademy.ac.uk/transactions/pdf/JohnStuart-Murray7(1).pdf (accessed 8 July 2010).
  • Svensson L. & Theman J. (1983). The relationship between categories of description and an interview protocol in a case of phenomenographical research. Paper presented at the Second Annual Human Science Research Conference, Duquesne University, Pittsburgh, PA, USA, 18-20 May 1983.
  • Vaughan D. & Yorke M. (2009). I can’t believe it’s not better: The paradox of NSS scores for art and design. URL: www.adm.heacademy.ac.uk/projects/adm-hea-projects/nationalstudent-survey-nss-project (accessed 7 January 2011).
  • Vickerman P. (2009). Student perspectives on formative peer assessment: An attempt to deepen learning? Assessment and Evaluation in Higher Education, 34 (2), 221-230.
  • Webster H. (2007). The assessment of design project work (summative assessment). CEBE Briefing Guide Series 9. URL: http://www.heacademy.ac.uk/cebe/publications/alldisplay?type=resources&newid=briefingguides/no_09_the_assessment_of_design_project_work&site=cebe
  • White R. (2000). The student-led ‘crit’ as a learning device. In: Nicol D. & Pilling S. (Eds.). Changing architectural education: Towards a new professionalism. London: E & FN Spon, pp. 211-219.
  • Wilkin M. (2000). Reviewing the review: An account of a research investigation of the ‘crit’. In: Nicol D. & Pilling S. (Eds.). Changing architectural education: Towards a new professionalism. London: E & FN Spon, pp. 100-107.
