
Designing self-evaluation instruments for academic programmes: lessons and challenges

Pages 77-90 | Received 16 Nov 2009, Published online: 04 Jun 2010

Abstract

A study was conducted to design valid and reliable self-evaluation instruments for the periodic evaluation of academic programmes at Bolgatanga Polytechnic in Ghana, drawing on evaluation experts and relevant stakeholders of the polytechnic. This paper presents some of the challenges encountered, including institutional support, the technical demands of designing the instruments, and culture. How these challenges were met, and the lessons learnt, illustrate how self-evaluation was introduced to an institution where no course evaluation had previously been carried out.

Introduction

As a way of ensuring the quality of tertiary education in Ghana, all tertiary institutions, both public and private, must obtain accreditation before running any academic programme. The practice has been that the National Accreditation Board of Ghana dispatches a team of experts to an institution that intends to introduce an academic programme at the tertiary level, to assess the basic provision the institution requires to run the programme. Based on the team's recommendations, the National Accreditation Board may, or may not, accredit the institution's academic programme; the decision rests on the institution's ability to satisfy the regulations set by the Board. If the institution meets all of the requirements, it receives accreditation to run the programme, normally for an initial period of three years, after which the National Accreditation Board carries out a reassessment before the programme can be reaccredited. As useful as this accreditation process may seem, because it ensures that an institution has the necessary resources to run an academic programme before it starts, it can neither guarantee continuous improvement nor produce the quality of education that the country needs in order to develop and become globally competitive. Evaluating provision alone is not sufficient to ensure quality, because quality depends on processes and outcomes, not only inputs; inputs which do not match the processes are unlikely to achieve the planned outcomes. The question, therefore, is how tertiary institutions in Ghana can ensure continuous improvement in their academic programmes, given that the National Accreditation Board's approach to ensuring quality in tertiary education is inadequate for this purpose. Quality education delivery remains Ghana's hope of reducing the high level of poverty in its society and of becoming competitive in today's knowledge-driven globalised economy. Although Ghana's polytechnic education system has come far compared with what it was a decade ago, the increasing challenges of the twenty-first century demand that the quality of polytechnic education is re-engineered to make it more responsive to national goals and aspirations as well as to global demands. The perception of low quality in polytechnic education in Ghana has generated much debate in the country, occasioned by strike actions and demonstrations by polytechnic students demanding the needed recognition for their academic awards, especially from their employers.

Polytechnic education is one of the key subsectors of higher education in Ghana, directed towards the provision of technical and vocational education to meet the manpower needs of the country and beyond. The polytechnic institutions are required to provide hands-on training in the necessary skills and competencies so that students can meet the manpower needs of industry and also become self-employed after their training.

Debate has arisen over the quality of Ghana's polytechnic education partly because quality is a term with several definitions. Brennan (2001) states, ‘in sum: there are (at least) as many definitions of quality in education as there are categories of stakeholders, times the number of purposes, or dimensions these stakeholders distinguish’. Scheerens, Glas, and Thomas (2003) note that in actual practice, concerns over quality may relate to a good choice of educational objectives (relevance) or to whether the educational objectives are actually attained (effectiveness). There may also be an emphasis on the fair and equal distribution of educational resources (equity) or a specific concern with the economic use of these resources (efficiency). Harvey and Green (1993) distinguish five broad categories of definition: quality as exceptional (excellence); quality as perfection or consistency (zero errors); quality as fitness for purpose (mission orientation and consumer orientation); quality as value for money; and quality as transformation. In analysing a number of aphorisms that have emerged as the common perspective on quality, Bush and Bell (2005) define quality as what the customers say it is as well as fitness for purpose. They define quality by the customer's satisfaction rather than the supplier's intention, and suggest that quality is not inherent in the product or the service but is connected to the use that consumers make of it. Bradley (1993) refers to quality as an evolving term: what was quality in the past is not quality today, and what is quality today will not suffice as quality in the future. This means that educational institutions need to conduct regular self-evaluation to enable them to redefine their quality objectives as well as to evaluate their strengths and weaknesses. In the twenty-first century, tertiary institutions are supposed to be ‘learning organisations’ (Birnbaum, 1998). They must constantly interact with their internal and external environments to identify their strengths and weaknesses, and position themselves to be globally competitive. Because of the internationalisation of tertiary education provision, tertiary institutions should not limit themselves to the quality models in their home countries, but must be globally conscious. One of the practices for ensuring improvements in the quality of tertiary education is institutional self-evaluation. Tertiary educational institutions have been described by Mintzberg (1983) as ‘professional bureaucracies’, and as such the ‘operating core’ (lecturers and professors) often has a high degree of professional autonomy over its work, which makes externally initiated evaluation of that work challenging. Birnbaum (1998) also states that the dynamic nature of the environment of higher education systems in the twenty-first century makes higher education institutions better described as ‘learning organisations’ than by the traditional description of ‘professional bureaucracy’.

Evaluation is a familiar term in education and has proven to be a useful tool in ensuring quality. It serves as a ‘mirror’ through which actors in education can view the state of the education system, institution, programme or an individual actor within the system. Evaluation in education has been defined by Scheerens et al. (2003) as judging the value of educational objects on the basis of systematic information-gathering in order to support decision-making and learning. In dealing with educational evaluation, certain contextual conditions are always at stake. These contextual conditions are what Scheerens et al. (2003) refer to as pre-conditions for evaluation: they identify political will and resistance, institutional capability, organisational capacity and technical capacity. Political will and resistance are important because evaluation ultimately leads to value judgements, which makes it a politically sensitive endeavour. Political sensitivity is not confined to the level of national policy; it occurs at all levels where vested interests are at stake. Education in many societies has been a relatively closed system, with little interference in what goes on in schools and classrooms (Scheerens et al., 2003). Teachers, as autonomous professionals, tend to resist close supervision or systematic review of their work. Since educational evaluation depends very much on the cooperation and participation of all stakeholders, it is important to consider strategies to improve political commitment to the evaluation process. Institutional capability is also vital because institutions are ‘the rules of the game’ in a society and may be distinguished from organisations, which structure ‘the way the game is played’ (Berryman et al., 1997, quoted in Scheerens et al., 2003). Examples of institutions are the legal system, property rights, weights and measures, and marriage. In assessing institutional capability, instances of an evaluation culture and tradition should be looked for. Organisational capacity, which includes the good functioning of organisations in terms of effective leadership, the ability to mobilise financial, material and human resources, and appropriate work practices (Orbac, 1998, quoted in Scheerens et al., 2003), should be considered when initiating any evaluation. Technical capacity, another important element in ensuring successful evaluation, concerns the availability of the required set of skills and expertise to carry out the evaluation. These pertain not only to research methodology and technology but also to communicative skills and substantive educational knowledge. The literature on educational evaluation classifies evaluation into two main types. Evaluation can be regarded as internal or external depending on the contractors, funders and initiators of the evaluation; the (professional) staff who carry out the evaluation; the persons in the object situation who provide data; and the clients, users or audience of the evaluation results (Scheerens et al., 2003). These categories of participants may overlap; for instance, the contractors could also be users, even though they may not be the only users. External evaluation occurs when the contractors, evaluators and clients are external to the object being evaluated.
When all the audiences are situated within the organisational unit which is the object of evaluation, we speak of internal evaluation. Internal evaluation may also be referred to as self-evaluation. The changing demands of consumers of higher education, as well as the emergence of ‘knowledge societies’, make self-evaluation one of the appropriate tools for ensuring continuous improvement in tertiary education, and it has proven to be a vital tool for contributing to such improvement. Scheerens et al. (2003) provide a comprehensive definition of self-evaluation as ‘a type of evaluation where the professionals that carry out the programme or core-service of the organisation (i.e. teachers and head teachers) initiate the evaluation and take responsibility for the evaluation and the evaluation results of their own organisation (i.e. their classes and the school)’. The important terms in the definition are ‘initiate’ and ‘take responsibility’: the need to evaluate comes from the professionals themselves, and they are prepared to accept the consequences of the evaluation process and its results, even though they may use external advisors. It is a process of identifying strengths and weaknesses in order to initiate quality improvement activities.

In 2007, Bolgatanga Polytechnic decided to institutionalise the self-evaluation of its academic programmes. The biggest challenge at the time was to prepare appropriate instruments for this purpose. Kane (2001) writes that probably the most difficult and time-consuming task in conducting any programme evaluation is developing the instrument to collect the information. Institutions usually search for existing instruments which may ‘fit the bill’, but often these instruments do not exist, or need major revisions before use. This places a heavy responsibility on institutions undertaking self-evaluation for the first time. The responsibility to institutionalise self-evaluation stimulated an interest in designing self-evaluation instruments for evaluating academic programmes at Bolgatanga Polytechnic. This paper presents some of the challenges and lessons learned from the study.

Methodology for the study

The methodology for designing the self-evaluation instruments was first of all to develop specific design questions: Which areas of academic programmes should be evaluated? Who should be the respondents/participants in the data collection process of the self-evaluation? What should be the methods for data collection? What should be the evaluation standards? What should be the format for presenting the evaluation feedback? And, finally, how would the technical adequacy of the instruments be ensured? Answering these questions completed the process of designing the self-evaluation instruments.

The first step in answering the design questions above was to organise a series of stakeholder workshops to brainstorm what should constitute the content, respondents, data collection methods, evaluation standards, format for presenting the evaluation results and technical adequacy of the instruments. Stakeholders brought together for these brainstorming activities included teachers, students, alumni, senior administrators and employers of polytechnic graduates.

An extensive review of literature on what constitutes quality areas of academic programmes as well as the design of evaluation instruments was also carried out. Experienced educational practitioners and evaluation experts were also interviewed on what they thought constituted quality areas of academic programmes. A draft version of the instruments was produced out of these activities and given to evaluation experts and educational practitioners to review.

Another series of stakeholder workshops was organised for each stakeholder group to identify quality areas of academic programmes that should be directed to them for evaluation. Validation workshops for all the stakeholders were organised to reassess and validate the instruments. In all, 60 teachers from three different polytechnics teaching in more than 10 different academic programmes; 25 students from three different polytechnics pursuing more than 10 different types of academic programme; 30 administrators from three different polytechnics belonging to top, middle and lower level management; and 25 alumni from three different polytechnics who pursued more than 10 different academic programmes, were involved. Fifteen employers of polytechnic graduates were also involved. The validated quality indicators were used to design survey questionnaires. A final workshop was organised for the aforementioned stakeholders to assess and also validate the survey questionnaires.

Results and discussion

The study produced the self-evaluation instruments, specified in terms of respondents, content, data collection methods, evaluation standards, the format for presenting evaluation feedback and the technical adequacy of the instruments.

Respondents

Respondents for the instruments identified by stakeholders were administrators, teachers, students, alumni and employers. These respondents were chosen on the basis that they are key stakeholders and thus capable of providing valid and reliable data as far as self-evaluation of academic programmes is concerned. The tasks of the respondents for the instrument include consultation, discussions, approval, examining of data and development of improvement strategies.

The content of the instrument

The content of the instrument identified by the stakeholders involved in the process was categorised into nine quality domains, as presented in Table 1, namely: curriculum, teaching, learning, assessment, output, outcome, programme organisation, resources and quality assurance of academic programmes. Each domain was operationalised into measurable indicators. With the help of the evaluation experts and educational practitioners involved in the process, each respondent group selected the quality indicators which should be directed to them for use during the evaluation exercises.

Table 1. Selection of operationalised quality indicators by the respondent groups.

The five stakeholder groups involved in the design of this instrument and the experts who reviewed it established that the curriculum of the programme was a vital component to be covered by the instruments, in the sense that it is the basis for the teaching, learning and assessment which eventually lead to the output and outcome of the programme. All of the programme outputs and outcomes must first be conceived in the curriculum, as must the knowledge, skills and competencies that students require. This domain deals first and foremost with the rationale of the programme and concentrates on issues such as the relevance of the programme to specific students' needs, industry needs, societal needs and institutional needs. The second aspect of the curriculum domain concerns the objectives of the programme: the specific skills and competencies that students require for job performance, academic progression and their everyday life outside school. The issues of institutional interest, students' expectations, and the feasibility and clarity of the objectives are covered as well. The third dimension of the curriculum domain which the instrument covers is the content of the curriculum. The content of the curriculum is the most central of all these issues: it embodies what should actually be imparted to the students. It deals with issues such as: adequate coverage and emphasis of the objectives; appropriate sequencing or arrangement of the various courses; sufficient subject matter information; balance between practical and theoretical orientation; appropriate skills and competencies; and gender and ethical issues. The fourth component of the curriculum domain is instructional strategies. This component handles the issue of how the curriculum content should be delivered to achieve the desired outcomes; it concerns the prescription of instructional roles and learning activities, as well as the prescription of appropriate instructional materials. The fifth and last component of the curriculum domain covered by the instruments is the credit and curricular implications, which deal with the credit requirements for successful completion of the programme. It also assesses the skills and experience required to succeed in the programme, as well as how the programme fits into the overall study portfolio of the particular department and the polytechnic.

The teaching, learning and assessment domains cover issues relating to the actual impartation of knowledge, skills, competencies and attitudes to the students in the programme by their teachers. As Diamond (1998) puts it, ‘the best curriculum or course design in the world will be ineffective if in our classrooms we do not pay appropriate attention to how we teach and how students learn’. The teaching, learning and assessment domains assess the actual impartation and absorption processes. The indicators for these domains were based on the general characteristics of teaching in higher education as well as the characteristics of adult learning. The teaching indicators covered are: the presentation techniques used by teachers; teacher professionalism; teacher competence; teacher use of appropriate teaching materials; teacher encouragement of student participation; teacher attitudes towards students; teacher assistance to students; the effect of teaching on students; the use of appropriate teaching modes; the use of group work; and the supervision of students' projects. The learning domain covers, among other issues: students' ability to see the relevance of ideas presented in the topics; students' efforts towards learning; students' ability to organise their studies effectively; students' time management towards their studies; students' ability to learn independently; and students' ability to ask important questions for clarification. The assessment domain covers: the appropriateness of assessment instruments and procedures; the fairness and objectivity of the instruments and procedures; the validity and reliability of the instruments; and the use of the assessment results. The provision of assessment feedback to students is covered as well.

The output and outcome domains concern the end product of, and the benefit from, the programme, respectively. The output domain consists of: the proportion of students who are able to complete the programme successfully; the attrition rate; academic achievement in terms of ‘class obtained’; and completion of the programme on schedule. Under the outcome domain, the concentration is on the employment of graduates from the programme; the appropriate placement of graduates; employer satisfaction with the performance of graduates; and alumni satisfaction with the programme.

The programme organisation domain focuses on issues such as: class and time scheduling; organisation and coordination of field placement and practical sessions; provision of information to both students and staff; provision of guidance and counselling to students; and helping students with social issues.

The resources domain deals with: the availability of qualified teaching staff for the programme; the availability of qualified students, support staff, teaching materials, library facilities, relevant textbooks, furnished laboratories and workshops; and adequate classrooms, adequate offices for staff and residential accommodation for students and staff.

The quality assurance and enhancement domain deals with activities for ensuring, maintaining and improving the quality of the programme. The main areas covered by the instrument are: regular revision of the curriculum to keep it up to date; regular assessment of teaching and learning; selection of qualified staff; selection of qualified students; and regular evaluation of the programme for improvement.

Methods for data collection

The study identified both qualitative and quantitative approaches as key data collection methods for self-evaluation. The qualitative approaches include interviews, workshops and focus group discussions with all the relevant stakeholders; these provide in-depth information which is essential for quality evaluation. Since the qualitative approaches may not provide all the relevant information, quantitative approaches are often employed in addition, involving the use of survey questionnaires. The extent of use of each approach depends on the domain being evaluated and the stakeholder involved.

Evaluation standards

Every evaluation requires some standards in the form of ‘benchmarking’, norms or criteria which form the basis of judgement, and it is often the case that different quality domains use different standards. The quality of what is observed within each quality indicator would be judged against a five-level scale: very good, good, neutral, fair and unsatisfactory. These levels represented, respectively: major strengths; strengths outweighing weaknesses; not being in a position to evaluate strengths or weaknesses; some important weaknesses; and major weaknesses. The stakeholders involved in the design of the instrument agreed that for a quality domain to be judged a major strength in these instruments, 90% or more of the total number of respondents must rate it very good. A domain rated a strength must have more than 60% of the total number of respondents rate it as very good or good. A domain is judged weak if more than 40% of the total number of respondents rate it as fair or unsatisfactory. Any domain for which more than 50% of the respondents judge it unsatisfactory is taken as a major weakness that needs to be addressed urgently.
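
To make the agreed cut-off points concrete, they can be expressed as a simple decision rule. The sketch below is an illustrative Python rendering only, not part of the study's instruments; the function name judge_domain and the dictionary of percentages are hypothetical.

```python
def judge_domain(pct):
    """Classify a quality domain from the percentage of respondents (0-100)
    selecting each scale level, following the cut-off points agreed by the
    stakeholders in the study."""
    if pct['very_good'] >= 90:
        return 'major strength'
    if pct['unsatisfactory'] > 50:
        return 'major weakness'
    if pct['very_good'] + pct['good'] > 60:
        return 'strength'
    if pct['fair'] + pct['unsatisfactory'] > 40:
        return 'weakness'
    return 'no clear judgement'  # combination not covered by the agreed thresholds

# Example: 45% very good, 30% good, 10% neutral, 10% fair, 5% unsatisfactory
print(judge_domain({'very_good': 45, 'good': 30, 'neutral': 10,
                    'fair': 10, 'unsatisfactory': 5}))  # -> 'strength'
```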

Format for presenting evaluation feedback

The format for presenting evaluation feedback in this instrumentation has two components, namely, a short description of the programme that has been evaluated and a tabular presentation of the results of the evaluation, as depicted in Table 2.

Table 2. Sample table format for feedback.

The tabular presentation would be done for each respondent group to facilitate comparison of results from different respondent groups, periods and programmes.
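
As an illustration of how such a table might be assembled from the collected ratings, the short Python sketch below cross-tabulates ratings by quality domain for one respondent group; the pandas library, the sample data and the column names are assumptions for the example, not part of the study's instruments.

```python
import pandas as pd

# Hypothetical ratings collected from one respondent group (e.g. students).
ratings = pd.DataFrame({
    'domain': ['teaching', 'teaching', 'resources', 'resources', 'assessment'],
    'rating': ['very good', 'good', 'fair', 'unsatisfactory', 'good'],
})

# Percentage of respondents choosing each scale level, for each quality domain.
feedback_table = (
    pd.crosstab(ratings['domain'], ratings['rating'], normalize='index') * 100
).round(1)

print(feedback_table)
```

Producing one such table per respondent group, evaluation period or programme makes the intended comparisons straightforward.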

Ensuring validity and reliability in the instruments

As data collection instruments are developed, the most important concerns are that the instruments are reliable and valid. Reliability and validity both indicate the extent to which error is present in the instrument. Reliability is an indication of the precision of the instrument (Cronbach, 2004), i.e. whether it consistently measures what it is supposed to measure and controls for random error in the measure. An instrument deemed valid, on the other hand, controls for systematic error in the measure, i.e. it is appropriate with regard to what it is intended to measure. To ensure high validity and reliability, experts who know what should be measured in education were asked to review the content of the instrument, and stakeholder agreement on all the components of the instrument was secured through stakeholder workshops and focus group discussions. Several methods were triangulated in designing the instruments: expert review, interviews, workshops, document analysis and focus group discussions. After the stakeholders had approved the instruments, they were piloted on selected academic programmes of the polytechnic to test their efficacy. Cronbach's alpha was computed to test the reliability of the instruments after the pilot exercise; a coefficient of 0.9 was obtained, indicating high reliability.
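
For readers unfamiliar with the statistic, a reliability coefficient of this kind can be reproduced from pilot data with the standard formula for Cronbach's alpha, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). The Python sketch below is a minimal illustration under assumed pilot responses, not the study's actual analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical pilot responses: five respondents rating four items on a 1-5 scale.
pilot = [[4, 5, 4, 5],
         [3, 4, 3, 4],
         [5, 5, 4, 5],
         [2, 3, 2, 3],
         [4, 4, 4, 4]]
print(round(cronbach_alpha(pilot), 2))  # coefficients near 0.9 or above indicate high reliability
```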

Challenges of the instrument design

The challenges encountered in the instrument design were in three broad categories, namely, institutional support, technical demands of the instrument, and culture.

Institutional support

The institutional challenges encountered during the design mainly concerned stakeholder commitment and the financial demands of the process. The concept of self-evaluation of academic programmes was relatively new in the polytechnic system in Ghana; the only form of evaluation familiar to most polytechnics was student assessment of teaching, not the evaluation of an entire academic programme. Much effort had to be put into explaining the concept of self-evaluation in order to elicit the commitment of the internal stakeholders (teachers, students and administrators) of Bolgatanga Polytechnic to the study. Another challenge in this category was securing management's willingness to release funds for the numerous stakeholder workshops that were held, including payments to the experts who reviewed the draft instrument. In Ghana there is a culture of demanding payment for attending workshops or performing duties other than core functions, which placed a considerable financial burden on the whole process.

Technical demands

All evaluation instruments must meet pre-determined technical requirements and this study was no exception. Presented next are some of the technical challenges encountered during the design of the instrument.

The technical challenges of the study began with the quality indicators to be covered by the instrument. The difficulty was how to delineate the areas of the polytechnic's academic programmes to be evaluated. The study gathered extensive literature on what constitutes the quality domains of academic programmes in tertiary institutions, and much time and energy were spent on synthesising the myriad quality indicators. The challenge became more intense when stakeholder workshops were organised to brainstorm the quality indicators to be covered by the instrument. Pressure to define what quality means and what types of information should be collected has always existed; however, according to Conrad and Wilson (1986), interest has been heightened by the relatively recent emphasis on programme evaluation for resource reallocation and retrenchment. The study encountered challenges with the different perspectives of programme quality identified by Conrad and Wilson (1986): the reputational view; the resources view; the outcomes view; and the value-added view. The reputational view assumes that quality cannot be measured directly and is best inferred through the judgements of experts in the field. The resources view emphasises the human, financial and physical assets available to a programme; it assumes that high quality exists when resources, for example excellent students, productive and highly qualified faculty, and modern facilities and equipment, are prevalent. The outcomes view of quality draws attention to a range of factors, from resources to the quality of the product; faculty publications, students' accomplishments following graduation and employers' satisfaction with programme graduates, for example, are indicators used. The value-added view directs attention to what the institution has contributed to a student's education (Astin, 1991); its focus is on what a student has learned whilst enrolled, and programmes are in turn judged on how much they add to a student's knowledge and personal development. At the stakeholder workshops to select the quality indicators to be covered by the instrument, the different stakeholder groups initially took entrenched positions and each wanted the selection of indicators to concentrate on its own perspective of quality.

These entrenched positions called for more stakeholder workshops to synthesise the different perspectives, which cost the polytechnic a lot of time and money.

Before the study began, the internal stakeholders were made aware that self-evaluation is an internal evaluation, which would enable them to assess the strengths and weaknesses of the polytechnic's academic programmes for improvement. The study was confronted with the challenge of selecting respondents who would be objective in providing information on the quality of academic programmes for improvement purposes. When it comes to self-evaluation, most people tend to over-rate themselves, thereby biasing the results. To counteract this effect, stakeholders were not allowed to select their own core functions for evaluation; for example, teachers were not allowed to select teaching for evaluation because of the tendency to over-rate themselves and bias the evaluation results. There were other aspects of the academic programmes, for example the output of the programme, for which it was realised that internal stakeholders would not be able to provide the required information for quality improvement; some external stakeholders therefore became respondents. The issue of who has the ability to determine what quality of academic programme is required to aid improvement activities was another challenge. This was also resolved through stakeholder agreement on who should serve as respondents for the instruments: administrators, teachers, students, alumni and employers, chosen, as noted earlier, on the basis that they are key stakeholders capable of providing valid and reliable data, with roles that included consultation, discussion, approval, examining data and developing improvement strategies.

The study identified both qualitative and quantitative approaches as key data collection methods for self-evaluation, and it was found that different respondent groups would require different data collection methods. The difficulty arose when the different respondent groups had to select the quality indicators to be directed to them for use during the evaluation, so that the most appropriate data collection method could be determined. It became a challenge because each stakeholder group had to justify why it had selected particular quality indicators. At one point the process degenerated into an unhealthy series of debates, and the researcher had a difficult time moderating consensus-building.

The different respondent groups had different views about quality, and therefore setting standards to measure the quality of the academic programmes became a difficult task. Also, it is often the case that different quality domains use different standards for judgement. After heated debates and arguments, some compromises were reached.

Any evaluation exercise must provide feedback information so that decisions can be made based on the results of the evaluation and the necessary changes effected. Whatever the intended usage of the feedback information, the evaluation must generate feedback for those who are entitled to use the information for decision-making. The use of feedback information may differ from person to person and institution to institution. Rossi and Freeman, quoted in Visscher and Coe (2002), identify three main usages of evaluation feedback: direct or instrumental use, conceptual use and convincing or symbolic use. Instrumental use involves making a decision and/or taking action based on the information in the evaluation feedback. Conceptual use relates to situations where the evaluative information influences a decision-maker's thinking about an issue without leading directly to action. Symbolic use concerns the use of the evaluative information in support of one's own viewpoints in discussion with others. Irrespective of the type of usage to which the feedback information may be put, the format for presenting the feedback must be appropriate and must present the information with clarity. The challenge at this stage was how to design a composite feedback format that would meet the needs of all the different stakeholder groups, because at the stakeholder workshops different groups preferred different types of feedback reporting format. After a series of debates, a compromise was reached by all stakeholder groups.

The challenges of developing new and valid instruments for specific use in research and practice can be complex and varied. There must be ways of identifying: the conceptual domain; the role of literature reviews and key experts; the processes used to confirm content validity and clarity of content; the feasibility of the validation process; and the pertinent ethical issues in developing sound, content-valid instruments. To ensure validity and reliability in the instrument, the researcher had to review current and relevant literature on the subject matter that would be suitable for the Ghanaian context; this process of careful literature review was very taxing. What became even more challenging was identifying experts who knew what should be measured in education and who could review the content of the instrument. Educationists in Ghana with educational evaluation profiles were few, and efforts had to be made to contact experts outside the country, who happened to have supervised the researcher's thesis, to review the instrument. Stakeholder agreement on all the components of the instrument was ensured through stakeholder workshops, focus group discussions and several methods of triangulation in data collection. Another validity and reliability challenge was constructing a questionnaire that would be understood in the same way by the different respondent groups. Operationalising the indicators into simple statements that could be clearly understood by the respondents was difficult because of the complex nature of educational constructs. ‘Educational constructs, like those in other social sciences, are complex, consisting of an array of contextual factors which can interact with each other and the variables under study’, writes Kember (2003). Expressing the quality indicators in simple statements sometimes communicated different meanings to the different respondent groups, and much work had to be done to simplify the statements for common understanding.

The challenge of culture

The whole process of designing the instrument was affected by culture. Culture has been defined by Hofstede (1991) as the enduring sets of beliefs, values, ideologies and behaviours that distinguish one group of people from another; it is the mental software of a person or group of people. During the instrument design it was realised that an evaluation culture was relatively new in the polytechnics selected for the study. In Bolgatanga Polytechnic, for instance, it was discovered that not only had self-evaluation of academic programmes never been organised, but even student assessment of teachers had never been conducted. Few faculty members had had any experience of evaluation of their work as teachers, and when they did, the experience had not been positive. The prospect of introducing self-evaluation of academic programmes, which would involve evaluation of their work as teachers, therefore created some apprehension. This apprehension intensified when, during the pilot exercise, a student gained access to a colleague's assessment form on which the colleague had made uncomplimentary comments about a teacher with whom the first student was close. He took the completed form, photocopied it for the teacher and named the student who, according to him, had written such damaging remarks. The teacher did not take this kindly and began victimising the student, until an investigation into the victimisation revealed that it stemmed from the remarks the student was said to have made about the teacher during the pilot exercise. The teacher was apparently disturbed because at Bolgatanga Polytechnic the criteria for teacher promotion, which were yet to be implemented, included student evaluation of teachers. Another aspect of culture that presented a challenge was not institutional but sociocultural: what Hofstede (1991) refers to as ‘power distance’, which concerns the distribution of power through decentralisation and institutional democracy. Ghanaian society generally has a high degree of ‘power distance’, especially in terms of age and educational differences. This manifested itself during the stakeholder workshops where students were involved. Students are the direct recipients of the quality of the academic programmes offered by the polytechnic, and they were expected to be forthright with their views when discussing issues of programme quality. However, the ‘power distance’ of the Ghanaian culture limited their contribution during the open forums where respondent groups were expected to debate each other's choice of quality indicators. The student group felt that they were in the midst of their ‘big people’, who were above them in both age and education, and that, as their culture demands, they should not challenge or debate with their elders.

Lessons from the instrument design

This study taught the researcher a number of important lessons. The pilot exercise revealed some limitations of the printed evaluation survey questionnaire: had the questionnaire been in electronic form, it would have been difficult for a student to steal a colleague's assessment form and photocopy it for the teacher being evaluated. This indicates that even in areas where Information and Communication Technology is least developed to support electronic administration of an evaluation survey questionnaire, measures should be put in place to assure respondents of sufficient confidentiality when they are involved in evaluation, especially in societies where evaluation is a relatively new process. The stakeholder brainstorming workshops also presented a range of insights into the stakeholders' perspectives of quality. It was exciting to see employers of polytechnic graduates debating with teachers and administrators of the polytechnic on what the focus of the quality of the polytechnic's academic programmes should be. Overall, the teachers sounded more academic in their presentations, whilst the employers concentrated on more practical ‘on the job’ issues. The issue of evaluation standards stimulated some interesting arguments from the different stakeholders, but the important lesson on setting evaluation standards in polytechnic education is that employers' and students' views should be paramount, because they were able to argue convincingly that they are the end users of the academic programmes. Another lesson learnt was the limitation associated with asking students to debate with their teachers in cultures with such a high degree of ‘power distance’. The researcher's experience of evaluation in tertiary education before the study had been in Western Europe where, as a student, he was selected together with other students, lecturers and administrators to assess the quality of an academic programme. Unlike in Western Europe, where students could easily debate with their lecturers without any feeling of intimidation, the situation was different at the polytechnic where this study was conducted: students' contributions through debating with their teachers and administrators were seriously limited by the culture of the area.

An important lesson from this study, which designers of evaluation instruments should not ignore, is the involvement of all relevant stakeholders. The commitment exhibited by stakeholders during the pilot exercise showed the degree of ownership of the instrument by all the stakeholders who were involved in the design. Even though some challenges were encountered during the pilot exercise, stakeholder commitment to the exercise was definitely not one of them. This was an indication that involvement of all relevant stakeholders in the design of evaluation instruments is key to the success of their implementation.

Conclusions

The study concludes that successful evaluation of the academic programmes of polytechnics in Ghana has the potential to improve stakeholders' images of polytechnic education. The stakeholders who participated in this evaluation process expressed their satisfaction with the concept. It was realised that evaluation could improve stakeholder support for the polytechnic's programmes, which is one of the drivers for moving polytechnic education forward in Ghana. Evaluation could also strengthen collaboration between the polytechnic and industry, if industry is involved in the evaluation process. The pilot exercise produced very incisive feedback on the polytechnic's academic programmes from the employers of polytechnic graduates. The concept has greatly broadened the polytechnic's perspective of quality and of what it should do to satisfy its stakeholders. It is expected that other polytechnics in Ghana will adopt the concept of self-evaluation of academic programmes to improve their quality.

References

  • Astin, A.W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: Macmillan.
  • Birnbaum, R. (1998). How colleges work: The cybernetics of academic organisation and leadership. San Francisco, CA: Jossey-Bass.
  • Bradley, L.H. (1993). Total quality management for schools. Lancaster, PA: Technomic.
  • Brennan, R. (2001). An essay on the history and future of reliability from the perspective of replications. Journal of Educational Measurement, 38(4), 295–317.
  • Bush, T., & Bell, L. (2005). The principles and practice of educational management. London: Paul Chapman.
  • Conrad, C.F., & Wilson, R.W. (1986). Academic programme review. Association for the Study of Higher Education. Washington, DC: ERIC Clearinghouse on Higher Education.
  • Cronbach, L.J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64(3), 391–418.
  • Diamond, R.M. (1998). Designing and assessing courses and curricula: A practical guide. San Francisco, CA: Jossey-Bass.
  • Harvey, L., & Green, D. (1993). Defining quality. Assessment & Evaluation in Higher Education, 18(1), 9–34.
  • Hofstede, G.H. (1991). Cultures and organisations: Software of the mind. London: McGraw-Hill.
  • Kane, M. (2001). Current concerns in validity theory. Journal of Educational Measurement, 38(4), 319–342.
  • Kember, D. (2003). To control or not to control: The question of whether experimental designs are appropriate for evaluating teaching innovations in higher education. Assessment and Evaluation in Higher Education, 28(1), 89–101.
  • Mintzberg, H. (1983). Structure in fives: Designing effective organisations. Englewood Cliffs, NJ: Prentice-Hall.
  • Scheerens, J., Glas, C., & Thomas, S.M. (2003). Educational evaluation, assessment and monitoring: A systemic approach. Lisse, The Netherlands: Swets & Zeitlinger.
  • Visscher, A.J., & Coe, R. (2002). School improvement through performance feedback. Lisse, The Netherlands: Swets & Zeitlinger.
