
Constructing a Test Bank for Information Science based upon Bloom’s principles

Pages 1-28 | Published online: 15 Dec 2015

Abstract

This paper outlines an approach to creating questions for a subject-based question bank for use in UK library schools. The authors outline a concept map for information science and describe how Bloom’s taxonomy can be adapted to the creation of higher level questions than the commonly used and simple recall type. Sample questions were created using the International Encyclopedia of Information and Library Science (IEILS) and subjects defined by staff at the Department of Information Science at Loughborough University. A role is suggested for the Learning and Teaching Support Network for Information and Computer Science (LTSN-ICS).

1.0 Introduction

Academics within the discipline of information science increasingly include elements of information technology in the content of their assignments, so they might be more open than most to the idea of using computers in the design, delivery and marking of those assessments. Indeed, it could be argued that they should already be comfortable and experienced in the use of computers. The fact that the discipline is going through a period of adjustment suggests that the academics concerned are open to change. It is useful to investigate new ideas while other changes are being made, so that if they are judged worth implementing, all of the procedural and administrative implications can be assessed at the same time.

This paper is an attempt to analyse the issues surrounding the use of computers in assessment, and to apply the knowledge thus gained in an example of one possible means of exploiting the strengths of computers to the benefit of both students and academics within information science.

2.0 Literature Review

2.1 Benefits of computer assisted assessment

2.1.1 Potential to save staff time

Discussions of the implementation of Computer-assisted Assessment (CAA) often focus on saving staff time by automating marking. The trend in recent years has been for many more students to enter British universities. At the same time, funding per student has declined, and the academics facing an increased marking burden are also under pressure to publish more research to improve their institution’s standing in the Research Assessment ExerciseFootnotei. There is also some concern about the reliability of essay marking, with at least one study suggesting that the correlation between pairs of markers rarely exceeds 0.6Footnoteii. A correlation this low indicates considerable inconsistency between human markers, which raises questions of fairness to students. Automated marking is much faster than marking by humans, and it has the added advantage of applying the same criteria to every answer, making it consistent and therefore reliable.

Several studies have looked into the issue. One replaced a report with an Optical Mark Reader (OMR) test and concluded that the time saved on marking began to outweigh the time spent on setting at around fifty studentsFootnoteiii. Another, replacing a written exam, put the figure at around sixty studentsFootnoteiv. These figures are reassuringly close, but in fact the situation is better still. As questions are created, they can be stored and re-used, so that the time spent on designing tests gradually falls, the task becoming more a matter of selecting from existing questions than of writing new ones, a process that is three or four times fasterFootnotev. Computers can also provide statistical analysis of questions. This can vary from simple percentages of right and wrong answers to more complex statistics that correlate student results within cohorts on the basis of age and gender. It must be pointed out that in some subject areas content changes so rapidly that out-of-date questions must be weeded out regularly.
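To make the break-even reasoning concrete, the arithmetic can be sketched as follows. The sketch is in Python, and the setup and per-script marking times it uses are hypothetical placeholders rather than figures from the cited studies; only the structure of the calculation is taken from the discussion above.

def break_even_class_size(setup_hours, marking_hours_saved_per_script):
    # Class size at which the time spent setting an objective test equals
    # the marking time it saves (automated marking itself is treated as free).
    return setup_hours / marking_hours_saved_per_script

# Hypothetical figures: 25 hours to design the test, 0.5 hours of marking
# saved per script, giving a break-even class of 50 students.
print(break_even_class_size(25, 0.5))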

2.1.2 Course coverage using assessment

Initial investigation suggests that objective tests require a lot of work before they can be considered a worthwhile method of assessment. To a certain degree this is the case, but they have one natural advantage over most other forms of assessment: their ability to assess a wide variety of subjects in a short time. This rests on the answering of a large number of questions across a broad range of topics, rather than, as in a conventional examination, a small number of questions that each focus on a single issue. Assessing wide coverage of a course’s content may be a better indication of student knowledge, as it prevents the technique of question spotting, which can otherwise be used to direct revision too intensively towards a minority of topics.

2.1.3 Feedback

However, probably the most important benefit to be gained from CAA has yet to be discussed. Student learning is greatly supported and enhanced by useful and effective feedback. Feedback can provide encouragement to students, reinforce and develop correct knowledge and reasoning, indicate areas of weakness in both learning and style, and direct future study in specific areas. For these aims to be achieved, the feedback needs to be provided frequently, concentrated on specific areas relevant to the individual student, and received soon enough after the assessment to be considered relevant and easily assimilated. The effects improve with the level of detail in the feedback, but long-term retention rapidly declines as the length of time between the assessment and the receipt of feedback increasesFootnotevi.

In much the same way as marking, the provision of effective feedback has become an increasing burden on academic staff as student numbers have increased. Also in the same way as marking, automating feedback greatly speeds up the process. If computer based assessment is used, the question design software will normally allow the inclusion of feedback to be shown to the student on selection of a particular response. Each comment only has to be composed and written once, but can be delivered easily to hundreds of students.

Unlike summative assessment, formative assessment does not count toward a module mark. Faster automated assessment procedures allow more frequent formative assessment and therefore more frequent feedback. In CAA systems the feedback is directly related to the student’s answer to a particular question and can be received immediately on answering it, unlike OMR systems, where responses are processed and returned later.
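The way such software attaches comments to individual responses can be illustrated with a minimal sketch in Python. The data model and names below are hypothetical and do not describe any particular CAA package; the point is simply that each piece of feedback is written once and then returned automatically to every student who selects that response.

from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    text: str
    correct: bool
    feedback: str  # composed once, delivered to every student who picks it

@dataclass
class Question:
    stem: str
    options: List[Option]

    def respond(self, index: int) -> str:
        # Feedback is returned immediately on selection of a response.
        chosen = self.options[index]
        prefix = "Correct. " if chosen.correct else "Incorrect. "
        return prefix + chosen.feedback

q = Question(
    stem="Which level of Bloom's taxonomy involves judging material against criteria of value?",
    options=[
        Option("Analysis", False, "Analysis breaks material into its component parts."),
        Option("Evaluation", True, "Evaluation judges material against particular criteria of value."),
        Option("Synthesis", False, "Synthesis combines elements into a new whole."),
    ],
)
print(q.respond(0))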

2.2 Student attitudes to computer assisted assessment

Student comments regarding the use of CAA are usually positive. However, two studies did record negative reactions. The first recorded student protests in Derby’s biology department against the use of negative marking, which gives a minus mark for an incorrect answer. At the same time they expressed a favourable reception for the new types of question, especially graphical ones, and the assessment of a broader range of the syllabusFootnotevii. The second found the most common negative reaction of biology students at Plymouth to be to the presentation of the questions on screen, which forced regular scrollingFootnoteviii. When this was adjusted in the following year 74% of the students said they preferred CAA to other forms of assessment they were being subjected to, and even in the first year 88% of the students liked the instantaneous nature of automated marking.

A liking for the rapid receipt of results appears in other surveys, along with appreciation of their reliability and especially their objectivity, and of the equally rapid feedback, but there is not always a majority in support of CAA. A group of geography students at Plymouth was almost equally split on the question of whether CAA was better than other methods, although 88% of the group did describe it as fair, and the questionnaire asking them to evaluate the whole module produced a much higher general satisfaction rating than in previous yearsFootnoteix.

2.3 Problems with computer assisted assessment

2.3.1 Resources required

Perhaps the most obvious difficulty with introducing CAA to an academic programme is the provision and maintenance of the computer equipment required. Running assessments on computers requires both software and hardware. The software must either be developed in-house, requiring much staff time, or purchased, bringing licensing issues on top of the basic cost. There must be enough computers available to meet demand, while in summative assessment there are the additional problems that invigilation becomes difficult if the machines are not all in the same area and that some students are disadvantaged if the machines are not of at least comparable processing speed. One lecturer, wanting 180 machines in order to reduce the number of test sessions to two and therefore reduce the possibilities for collaboration, had to close the public areas of his university’s computing services departmentFootnotex. Another set of concerns is the potential for the failure of individual computers, the university network server, or even the power supply.

Most of these problems are reduced in effect if computers are used for formative assessment in the students’ own time. However, even in this situation it is advisable to appoint a CAA Officer. Such a person can help staff with designing, arranging, running and evaluating assessments, help students familiarise themselves with the system, co-ordinate and standardise CAA across the institution, and ensure that the necessary resources and technical support are availableFootnotexi.

2.3.2 Limitations in content

There is general agreement in the literature about some things that can be assessed using computers: recall of facts, understanding and interpretation of terminology, numeracy, application of formulae and procedures, and reasoning, although attempts at this last are felt to produce extremely difficult questions. Such questions can be laborious for students to work through, because they contain complex relationships and usually take more time to answer than simple multiple-response questions with short options. Advocates of the process do accept that there are skills that computers cannot as yet assess. These include communication skills used in oral language exams or presentations, teamwork, the construction of an argument and originality of thought. Indeed, original thought involves the creation of an unpredicted yet valid answer to a question, which the objective marking of a computer cannot handleFootnotexii.

2.4 Conclusions

The discussion has tried to present some of the arguments supporting the use of computers in assessment in higher education, and to outline some of the benefits that can follow from their introduction. It has recognised that there are problems to be overcome, though these are not as insurmountable as is sometimes suggested.

3.0 Educational Objectives

3.1 Introduction

The content of questions is just as important to the success of an assessment programme as the means of presenting those questions to the students. Any assessment must have some connection to the course it is part of in order to have any use at all as a tool for judging how well students have learnt the material contained within the course. A programme of summative assessment should assess the entirety of the course concerned, with each of the constituent parts weighted in proportion to its importance within the courseFootnotexiii.

It follows, therefore, that the planning of an assessment should involve careful consideration of the aims and objectives of the course of which it forms part. Several different methods have been used to divide up the whole of a course of instruction into individual segments that can be assessed. The one most associated with objective tests, and therefore the most appropriate starting point for this discussion, is the taxonomy of learning objectives, of which the most famous example is that produced by BloomFootnotexiv.

3.2 Bloom’s taxonomy

Bloom and his colleagues were attempting to use existing lists of educational objectives to identify the behaviours their creators wanted to assess. In theory, once the desired behaviours were identified and incorporated into a taxonomy, they might be fed back into the creation of more clearly structured and better directed objectives. The structure of the taxonomy that was created, with the major classes and their sub-classes, is shown in the following tableFootnotexv:

Table 1 Bloom’s taxonomy of educational objectives

A brief elaboration is probably useful here. The base of the taxonomy is knowledge, in Bloom’s terms the recall of appropriate material. This is distinguished from the various skills used in the solutions of problems in an assessment, which are gradually built up in order of complexity. The material must be correctly interpreted (comprehension), relevant knowledge applied to the specific situation (application), the material broken down into its component parts to show the relationships between them (analysis), then reorganised and combined with other elements to form a new whole (synthesis), and finally judged according to particular criteria of value (evaluation)Footnotexvi.

3.3 Developments since Bloom

The users of Bloom’s taxonomy over the past forty-five years, in justifying that use, concentrate on the simplicity of the tiered structureFootnotexvii and the ease with which it can be appliedFootnotexviii. Other writers are more critical, and several are prepared to support their views by creating alternative taxonomies. The first of these to be discussed is the RECAP systemFootnotexix. This retains Bloom’s Knowledge, Comprehension and Application as REcall, Comprehension and Application, but subsumes the higher skills within a category called Problem-solving.

The reasoning behind the change is the removal of unnecessary distinctions between Bloom’s higher skills. It is true that analysis, synthesis and evaluation are closely linked in the solution of problems, but they do seem to be distinct stages within the process. However, the description of Bloom’s taxonomy given in that source replaces Evaluation with Design, which sounds much closer to Synthesis, perhaps explaining the desire to combine the two, together with the reverse procedure of AnalysisFootnotexx.

Where RECAP’s advocates accused Bloom of making too many distinctions, another writer argued the reverse, wanting to separate skilled behaviour from factual knowledgeFootnotexxi. Bloom’s taxonomy already does this to a certain extent, so the dispute is over whether knowledge is the base of the skills pyramid or a separate structure. The key is the process by which cognitive skills are developed. Some skills can seem instinctive, with individuals, often young children, possessing them to an extraordinary degree without extensive formal training.

The difficulty with the argument is that formal training based on knowledge improves the standard of both those with great natural talents and, if to a lesser extent, those with lesser talents. The explanation may lie in the amazing ability of young children to acquire and internalise knowledge, so that in later life it is taken for granted. Walking, for example, is a skill based on knowledge; it is just that most adults have not felt the need to analyse and identify the knowledge involved.

Another replacement taxonomy is SOLOFootnotexxii. The creators of this schema believed that Bloom’s approach was incorrect. They focused on the evaluation of student responses to assessment questions rather than the design of such questions. This appears to be a conscious choice of reliability over validity, and it may well be true that it is difficult or even impossible to adequately deal with both of these issues in one taxonomy.

Fortunately, as was discussed earlier, computer assisted assessment, and in particular objective tests, have built-in reliability. On this argument, therefore, designers of objective tests have chosen Bloom as a framework, while the devisers of SOLO concentrated on essays and similar exercises in the development of their system.

A third alternative taxonomy in the literature is in fact an adaptation of Bloom’s by one of his collaborators, which has since been applied to computer assisted assessmentFootnotexxiii. In this system the names of the levels have been replaced with verbs that facilitate the creation of learning objectives, as the following table shows:

Table 2 Comparison of taxonomy terms

It is immediately obvious that Bloom’s Synthesis and Evaluation have been reversed in Evaluate and Create, and indeed this seems an improvement on the original order. It is perhaps possible to create a new synthesis from analysed material without evaluating that material, but it would be a mere rearranging of pre-existing concepts without consideration by the student, and therefore of little intrinsic worth or originality.

Creation’s place as the highest level of learning is supported by the fact that it cannot be assessed by objective testsFootnotexxiv. Originality on the part of the students requires the possibility of mutually exclusive, equally correct answers, which is impossible with objective marking. Indeed, Bloom had considerable difficulty in producing objective questions capable of testing synthesis. Only one of his nine sample questions for this level of the taxonomy is objective in form, and he admits that it largely involves application and analysisFootnotexxv. For each of the other levels, including evaluation, the majority of the sample questions are objective ones.

3.4 Concept maps

Another method of producing learning objectives involves breaking down the academic discipline concerned into its basic components, theoretically allowing the communication of a thorough subject knowledge to the students. The first layer is a relatively small number of concepts that are of central importance to the subject but independent from each other. The principle can then be reapplied to each of these central concepts, producing as many layers as is necessary or desirableFootnotexxvi.

The whole is called a concept map, and when done well contains all the essential aspects of the subject and shows how each one relates to the others. It then becomes simple to devise both modules and assessments for the course so that all the important topics are covered.

A concept map allows the division of the course or module into small sections, directing the composition of whole question banks, tests and even individual questions. Once the concept map has identified the material to be included in a test, a taxonomy can be used in an attempt to ensure that students utilise a range of cognitive skills in producing their answers.
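How a concept map might direct the organisation of a question bank can be sketched briefly in Python. The concepts, topics and questions below are invented illustrations, not the content of the database described later; the sketch shows only the principle of tagging questions with topics and drawing them out by top-level concept.

concept_map = {
    "Information management": ["Collection management", "Preservation"],
    "Communication": ["Communications technology"],
    # ... the remaining top-level concepts and their subsidiary topics
}

question_bank = [
    {"topic": "Preservation", "stem": "Preservation is best described as..."},
    {"topic": "Communications technology", "stem": "Bandwidth refers to..."},
]

def questions_for_concept(concept):
    # Return every question whose topic falls under the given top-level concept.
    topics = set(concept_map.get(concept, []))
    return [q for q in question_bank if q["topic"] in topics]

print([q["stem"] for q in questions_for_concept("Information management")])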

4.0 Designing An Objective Test

4.1 Creating a concept map

It could be suggested that an obvious starting point for creating a concept map for the discipline of information science is the International encyclopedia of information and library science Footnotexxvii. There are three types of entry in the encyclopedia. The smallest but most important of the types is that consisting of the nine major articles, described in the preface as the foundations on which the book is builtFootnotexxviii. As a result, these were made the starting point of the attempt to divide the encyclopedia into narrower subject areas. Each of the article titles was used as the heading of one of the areas, although in two cases it was judged necessary to amend the wording to give greater clarity on what the area encompassed.

Some of the supporting articles written by specialist contributors to the encyclopedia, such as that on communications technology, are almost as long as the shorter of the main articles. These supporting articles make up the majority of the encyclopedia, and the analysis was mainly designed as a means of allocating questions produced from their subject matter to particular sections of the database.

The last type of entry is the short definitions. These were composed by the encyclopedia’s editors towards the end of the production process, when they decided that particular terms required more explanation. They were not included in the analysis, for several reasons. Most of them do not exceed a single paragraph, so they provide little material for question composition. In many instances they have no cross-references, so any allocation to a subject area would have been at the discretion of the present writer. Some such allocations would have been self-evident, and where they were necessary for longer articles the difficulty did not force the rejection of the material, but it was considered preferable to rely on editorial guidance wherever possible. Finally, their sheer number would have placed a significant burden on anyone attempting to use the guide to the encyclopedia quickly and easily, since the number of articles already makes it lengthy.

4.2 Procedure

There were three stages in the production of the structural analysis. The nine central articles were read to produce a clear understanding of the demarcations used by the encyclopedia, and articles referenced in them were recorded. Then cross-references were used in the other direction, with supporting articles being checked for links to the main ones. Finally, all the supporting articles were read to enable the allocation of those remaining, and to check the appropriateness of those made through cross-references.

The best example of the benefit of the last stage is provided by the article on preservationFootnotexxix. This has a direct cross-reference to the long article on communication, but in the body of the article preservation is described as an umbrella term for a wide variety of collection management responsibilities. The collection management article is directly cross-referenced to that on information managementFootnotexxx, and that does seem a more reasonable association for preservation than communication.
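The first two stages of this procedure amount to following cross-references between supporting and core articles, which can be sketched in Python as follows. The article records are invented examples, and the third stage, reading each article and adjusting the allocation, is necessarily a manual step represented here only by a comment.

core_articles = {"Communication", "Information management"}  # two of the nine

supporting_articles = {
    "Preservation": {"Communication"},  # cross-reference printed in the entry
    "Collection management": {"Information management"},
}

def allocate(supporting, core):
    # Stages 1 and 2: allocate each supporting article to the core articles
    # it is cross-referenced with; anything unlinked is flagged for stage 3.
    allocation = {}
    for title, refs in supporting.items():
        linked = sorted(refs & core)
        allocation[title] = linked if linked else ["unallocated"]
    return allocation

print(allocate(supporting_articles, core_articles))
# Stage 3 (reading the articles) then overrides cases such as preservation,
# which reads more naturally as part of information management.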

4.3 Results

This section consists of a diagram outlining the structure of the encyclopedia. All of the terms used are titles of articles, with the headings of the nine central articles given in bold, surrounded by subsidiary concepts in normal type, and with the links drawn in. Some topics have been treated as equivalent, or at least very similar, in concept; this is shown by listing them together on either side of a forward slash. Some attempt has been made to retain the alphabetical structure of the encyclopedia through the arrangement of the headings within the nine subject areas, but occasionally considerations of space have forced its abandonment.

Figure 1 Concept map of the discipline of information science

4.4 Validity

It was noted that the encyclopedia’s editors were both staff at Loughborough University, and consequently their breakdown of information science might have been influenced by the curriculum there. Fortunately, the design processes of several postgraduate curricula have been reported in journal articles, allowing this division into subject areas to be tested against those of universities across the world. The subjects of the core modules of three curricula are shown in the following table:

Table 3 Comparison of information science core modules

The encyclopedia covers all of the subjects listed in the table. The articles describing the courses of the universities in Montreal and Florida also list optional modules that are available. Most of these are also dealt with, although there are no exact matches for Montreal’s information analysis and database design, and Florida’s information and image management.

Differences of emphasis between the university courses are a good sign. While they perhaps weaken the selection of nine clear concepts central to the encyclopedia’s view of information science, the fact that despite such differences the encyclopedia covers all the relevant subjects is good evidence for its comprehensiveness. In the context of a database of questions, the main purpose of subject divisions is to break up the questions into manageable and at least loosely related groups. The software used for the database is such that questions can be taken from anywhere for use in a test, topics aiding rather than restricting question selection.

4.5 Composing the questions

Almost as many writers provide advice on how objective tests should be constructed as give examples of question types for use in them. In most cases a straight list of advisable actions is presented, allowing little room for discussion. The best way of indicating something approaching a consensus view of best practice in composing objective test questions is to list the individual items of advice in order of their popularity in the literature, as in the table below:

Table 4 Frequency of suggestions for good objective test design

The only issue arising from the table seems to be the possible contradiction between positioning the correct answer randomly and arranging the responses logically. Without rolling dice, it is very difficult to achieve random distribution of the correct answer within the distracters. If there is no logical system behind the arrangement of the choices, students may believe that there is a pattern, such as a reasonably equal number of correct answers in each position, and may attempt to act on it. If the responses are always arranged alphabetically, a logical basis for the ordering is readily available for the students, who can concentrate on thinking about the answer.

It is theoretically possible to combine both instructions, randomly selecting the position of the correct answer and then building distracters around it. However, this greatly restricts the options available for distracters, and probably adds a prohibitive amount of effort to question design. Therefore, although random positioning is slightly more popular in the literature, responses in this project were arranged in alphabetical order.
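The ordering adopted can be shown in a very short Python sketch: responses are sorted alphabetically and the key falls wherever that ordering places it. The example options are illustrative only.

def arrange_alphabetically(options, correct):
    # Sort the responses alphabetically and report the new position of the key.
    ordered = sorted(options, key=str.lower)
    return ordered, ordered.index(correct)

ordered, key_index = arrange_alphabetically(
    ["Synthesis", "Analysis", "Evaluation", "Comprehension"], correct="Evaluation")
print(ordered, key_index)  # ['Analysis', 'Comprehension', 'Evaluation', 'Synthesis'] 2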

Another set of instructions can be produced from the literature for the use of feedback. Anyone composing feedback is urged to be positive and constructive, explain the reasons behind the student’s mistake, allow time within the test for the reading of feedback, be simple and friendly, direct the student towards further work in the subject area of the question, provide detailed comments on ideas and techniques used in the construction of the question, and focus on a few points to encourage their assimilation.

4.6 Application of a taxonomy

Following the suggestions in the literature, a database of objective questions was created. It was then analysed with one of the taxonomies discussed above. The one deemed the most appropriate for use in this project was that changing the order of Bloom’s levels and indicating the skill represented by each level through a verb rather than a nounFootnotexxxiv. Each question in the database was assessed to see which skill it seemed to require from students.

Purely factual questions were listed under Remember, those involving the appreciation of technical terms or descriptive in nature under Understand, those using examples of particular situations under Apply, those requiring the selection of a best answer from the alternatives or other thinking beyond the bounds of the question under Analyse, and those requiring a judgement on aspects of either the question statement or the choices under Evaluate. The results for each of the subject areas were arranged to produce the following table:

Table 5 Spread of cognitive skills assessed by database subject areas

The obvious omission of Create from the table requires explanation. It was argued in the initial discussion of the taxonomy that the skill Create could not be assessed by objective tests, and none of the database questions seems to contradict this. Were Create to be represented in the table, it would be an extra column of ten zeros to the right of the others.

Analysis of the table produces a few points of interest. Almost a third of the questions have been classed as Remember questions, the lowest level. This suggests that it is easier to create objective tests assessing basic knowledge. However, it did prove possible to assess higher level skills with a sizeable majority of the questions, and there is only a slight reduction in question numbers as the skill level required to answer them increases. Evaluate is the skill tested by the smallest number of questions, but this is at least in part due to a conscious decision to limit the number of Assertion/reason questions included in the database.

It is gratifying that there are only four zeros in the table, and that they are each in a different column. The questions for each subject area were created on different occasions, and there was a suggestion that different question types were easier to create at different times. This may simply reflect variations in the mood of the question designer rather than indicate anything about the suitability of particular material for certain question types. The one area of concern is the lack of any questions on the organisation of knowledge testing the higher levels. In general, though, the division of questions between skill levels in individual subject areas produces numbers that are too small to support any definite conclusions, and more questions designed by other researchers would be required before certainty could be approached. Certainly question designers should consider what is being assessed on a particular course or module, endeavour to select the appropriate question types, and provide sufficient numbers of questions of each type to test students on the content fairly and comprehensively.
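The cross-tabulation behind Table 5 can be reproduced in a few lines of Python. The question tags below are invented examples rather than the project’s actual data; only the tallying of questions by subject area and taxonomy level reflects the procedure described in this section.

from collections import Counter

LEVELS = ["Remember", "Understand", "Apply", "Analyse", "Evaluate"]

questions = [
    {"area": "Organisation of knowledge", "level": "Remember"},
    {"area": "Organisation of knowledge", "level": "Understand"},
    {"area": "Information management", "level": "Apply"},
    {"area": "Information management", "level": "Evaluate"},
]

counts = Counter((q["area"], q["level"]) for q in questions)

for area in sorted({q["area"] for q in questions}):
    row = [counts.get((area, level), 0) for level in LEVELS]
    print(f"{area:30s}", row)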

5.0 Discussion

5.1 Variation in assessment method

The database created in this project is only one way to use computers in assessment. Moreover, CAA is only one among numerous assessment methods, both new and old. The literature on CAA and, with the proviso that a lesser proportion of the whole has been researched for this article, that on assessment in general, tends not to argue in favour of one method to the exclusion of any others. Criticisms are made, but normally to demonstrate the complementary capabilities of different approaches.

There is some evidence that student rankings can be affected by the method of assessment. This does not necessarily mean that certain methods are discriminatory, or even that one method is better at representing student abilities than another. Such claims result from an excessive demand for reliability, which reduces the case for varying assessment methods to the merely administrative; and easing the workload of academic staff, while desirable, is not a sufficient reason in itself for change. If students are asked to do different tasks, is it not reasonable to expect that they will show different levels of aptitude at them? This is accepted when the content of assessments is considered; no one expects students to perform at the same level in maths and English exams, for example. The same should be the case when methodology is the variable.

It is also possible that variety in assessment methods is beneficial for student performance in general. It is easy to become jaded when preparing for a series of essay-writing exams, and this may affect final performance. Different methods of examination might result in different preparation methods, reducing the monotony of revision at least slightly and perhaps as a result sustaining student motivation.

5.2 Care in design of learning objectives

In designing all forms of assessment, the most important factor is to ensure validity. The assessment must be linked to what has been taught, and both must be linked to what was considered to be necessary or desirable to teach. Learning objectives are therefore the first in importance of the stages of assessment design as well as the first in chronology.

Research for this project involved the examination of several models for the creation of learning objectives, some taking very different approaches to each other. As with methods of delivering assessment, differences in methods of planning assessment do not necessarily make any one method better or worse than any other. The appropriateness of each depends on the aims and objectives of the educators concerned.

Problems can arise when a model is chosen because it is considered easy to use and widely appliedFootnotexxxv. Bloom’s taxonomy is very widely used in the creation of objective tests, but in the view of some researchers, its use does present certain difficulties. Acceptance of a model by a researcher without considering whether adaptations are required by the particular circumstances of the research exercise is a mistake in any field.

5.3 Publication of questions used in research exercises

Examples of questions were frequently requested from those presenting papers at the recent International Computer Assisted Assessment Conference in Loughborough, and on at least one occasion most of the audience were interested in seeing all of the questions used in the study concerned.

Without access to the questions, it is difficult for readers of a research paper to evaluate the conclusions properly, as question design has such a potentially large influence on student performance. Further, if the ideas of an article are interesting, and the reader wishes to repeat the experiment, it is much easier to design questions from examples than from a mere description of the content, aims or even question types. To overcome the problem it might be necessary to have a secure site that hosts questions which can then be seen only by authorised persons, such as lecturing and teaching staff.

6.0 Recommendation

6.1 Co-operation in question bank creation

There are two levels to co-operation in the area of CAA covered by this project. The first is institutional, and is increasingly being both advocated and implemented, particularly by scientists. CAA officersFootnotexxxvi can provide considerable help to an individual academic seeking to set up a programme of computer assisted assessment. They can provide software necessary for test design, train the academic in its use and provide any necessary technical support. These duties gain greater importance when a CAA programme becomes institutional in scope. It is possible to add to them the familiarisation of students with the software used to deliver the tests, institution-wide standards for question design, supervision and maintenance of a gradually increasing bank of questions, and automated scheduling, running and marking of the tests.

All of this is beneficial, but the dearth of questions remains a fairly common complaint. In the information science context, working within individual institutions effectively restricts the creation of useful questions to the members of individual departments, with limited opportunities for the exchange of ideas and the sharing of material. An advance is suggested by a programme recently begun in the south-west of England, where equivalent engineering departments in several neighbouring universities have begun to set up a shared question bankFootnotexxxvii. The logical progression along this path is to nationally-accessible question banks. This would require central administration, and one way of dealing with this issue would be to use the Learning and Teaching Support Network.

The LTSN is a network of twenty-four subject centres based in higher education institutions throughout the country, set up to promote the transfer of good practices in all subject disciplines, and, more importantly, to act as a distributor of learning and teaching resources, including those involving the implementation of communication and information technologyFootnotexxxviii. One of the subject centres covers Information and Computer Sciences. It is based in the University of Ulster, in partnership with Loughborough, Warwick, Heriot-Watt and North London universities. Therefore the LTSN has in place the beginnings of a national infrastructure required for the development of national question banks on information science subject areas, and if any theoretical advice is needed about the design of either questions or tests, it could be provided by the CAA Centre. The resulting system would look something like the following diagram:

Figure 2 Hypothetical national network of question distribution

Such a system is probably a long way away, if it is ever to be set up, although all the necessary elements are currently in existence. Still, the higher the aims set, the greater the likely progress.

Notes

i Chalkley, Brian. Using optical mark readers for student assessment and course evaluation. Journal of Geography in Higher Education, 1997, 21, 99-100.

ii Black, Paul J. Testing: friend or foe, 1998, pp. 37-38.

iii Jones, Allan. Setting objective tests. Journal of Geography in Higher Education, 1997, 21, 108.

iv Callear, David and Terry King. Using computer-based tests for information science. Association for Learning Technology Journal, 1997, 5(1), 28.

v Angseesing, Joe. Computer-marked tests in geography and geology. In: Dan Charman and Andrew Elmes, eds. Computer based assessment, 1998, ii. pp. 6-7.

vi Azevedo, Roger and Robert Bernard. A meta-analysis of the effects of feedback in computer-based instruction. British Journal of Educational Technology, 1995, 26(1), 57-58.

vii O’Hare, David. Student views of formative and summative CAA. In: Myles Danson and Carol Eabry, eds. Fifth International Computer Assisted Assessment Conference, 2001, pp. 371-385.

viii Ricketts, Chris and Sally Wilks. Is computer assisted assessment good for students? In: Myles Danson and Carol Eabry, eds. Fifth International Computer Assisted Assessment Conference, 2001, pp. 415-423.

ix Charman, Dan and Andrew Elmes. Formative assessment in a basic geographical statistics module. In: Dan Charman and Andrew Elmes. Computer based assessment, 1998, ii. 15-19.

x Hawkes, Trevor. An experiment in computer-assisted assessment. Interactions, 1998, 2(3).

xi Callear and King, ref. 4, 31-32.

xii Duke-Williams, Emma and Terry King. Using computer aided assessment to test higher level learning outcomes. In: Myles Danson and Carol Eabry, eds. Fifth International Computer Assisted Assessment Conference, 2001, p. 181.

xiii Pritchett, Norma. Effective question design. In: Sally Brown, Phil Race and Joanna Bull, eds. Computer assisted assessment in higher education, 1999, pp. 29-30.

xiv Bloom, Benjamin S., ed. Taxonomy of educational objectives: the classification of educational goals, 1956.

xv Condensed from Bloom, ref. 16, pp. 201-207.

xvi Simas, Robert and Ron J. McBeath. Constructing multiple choice test items. In: Ron J. McBeath, ed. Instructing and evaluating in higher education: a guidebook for planning learning outcomes, 1992, p. 166.

xvii Pritchett, ref. 15, pp. 32-33.

xviii Delpierre, G.R. The degradation of higher levels of the cognitive domain and its implication for the design of computer-based question episodes. Studies in Higher Education, 1991, 16(1), 65.

xix As outlined in Cox, Kevin and David Clark. The use of formative quizzes for deep learning. Computers & Education, 1998, 30, 157.

xx The original source for the RECAP model is inaccessible to the present writer. It is therefore unclear where this error in the conception of Bloom’s taxonomy originated.

xxi Carter, Richard. A taxonomy of objectives for professional education. Studies in Higher Education, 1985, 10, 135-149.

xxii Biggs, John B. and Kevin F. Collis. Evaluating the quality of learning: the SOLO taxonomy (Structure of the Observed Learning Outcome), 1982.

xxiii Duke-Williams and King, ref. 14.

xxiv Ibid. p. 181.

xxv Bloom, ref. 16, pp. 182-183.

xxvi Todd, Ross J. and Joyce Kirk. Concept mapping in information science. Education for Information, 1995, 13, 337-340.

xxvii Feather, John and Paul Sturges, eds. International encyclopedia of information and library science, 1997.

xxviii Ibid., p. x.

xxix Ibid. pp. 370-371.

xxx Ibid. p. 63.

xxxi Browne, Mairead. Disciplinary study in information science: a foundation for the education of information professionals. Education for Information, 1986, 4, 311-313.

xxxii Bertrand-Gastaldy, Suzanne, Paulette Bernhard and Jean-Marc Cyr. Reconstructing a masters degree programme in Library and Information Studies: the Université de Montréal experience. Journal of Education for Library and Information Science, 1993, 34, 241.

xxxiii Sherron, Gene T. and Marie B. Landry. Reinventing the bachelor’s degree: call it “Information Studies!” Journal of Education for Library and Information Science, 1999, 40(1), 53.

xxxiv Duke-Williams and King, ref. 14, p. 180.

xxxv Carneson, John, Georges Delpierre and Ken Masters. Designing and managing multiple choice questions, 1998, describing Bloom’s taxonomy.

xxxvi See Callear and King, ref. 4, 31-32 for a useful checklist of CAA Officer tasks.

xxxvii Wellington, Sean, Su White and Hugh C. Davis. Populating the testbank: experiences within the electrical and electronic engineering curriculum. In: Myles Danson and Carol Eabry, eds. Fifth International Computer Assisted Assessment Conference, 2001, pp. 511-520.

xxxviii Learning and Teaching Support Network. (URL: http://www.ltsn.ac.uk). [31.8.2001].

References

  • Astin, Alexander W. Assessment for excellence. Phoenix, AZ: American Council on Education, 1993.
  • Azevedo, Roger and Robert Bernard. A meta-analysis of the effects of feedback in computer-based instruction. British Journal of Educational Technology, 1995, 26(1), 57–58.
  • Bence, David and Ursula Lucas. The use of objective testing in first-year undergraduate accounting courses. Accounting Education, 1996, 5, 121–130.
  • Bertrand-Gastaldy, Suzanne, Paulette Bernhard and Jean-Marc Cyr. Reconstructing a master’s degree program in Library and Information Studies: the Université de Montréal experience. Journal of Education for Library and Information Science, 1993, 34, 228–243.
  • Bhalerao, Abhir and Ashley Ward. Towards electronically assisted peer assessment: a case study. Association for Learning Technology Journal, 2001, 9(1), 26–37.
  • Biggs, John B. and Kevin F. Collis. Evaluating the quality of learning: the SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press, 1982.
  • Bizanich, Eleanor, et al. Internet-based testing: a vision or reality? THE Journal, 1997, 25(2). (URL: http://www.thejournal.com/magazine/vault/A1918.cfm).
  • Black, Paul J. Testing: friend or foe. London: The Falmer Press, 1998.
  • Bloom, Benjamin S., ed. Taxonomy of educational objectives: the classification of educational goals. London: Longmans, 1956.
  • Brown, Sally and Angela Glassner, eds. Assessment matters in higher education. Buckingham: The Society for Research into Higher Education, 1999.
  • Brown, Sally and Peter Knight. Assessing learners in higher education. London: Kogan Page, 1994.
  • Brown, Sally, Phil Race and Joanna Bull, eds. Computer assisted assessment in higher education. London: Kogan Page, 1999.
  • Brown, Sally, Phil Race and Brenda Smith. 500 tips on assessment. London: Kogan Page, 1996.
  • Brown, Sally, Chris Rust and Graham Gibbs. Strategies for diversifying assessment in higher education. Oxford: Oxford Centre for Staff Development, 1994.
  • Browne, Mairéad. Disciplinary study in information science: a foundation for the education of information professionals. Education for Information, 1986, 4, 305–318.
  • Bull, Joanna. Using technology to assess student learning. Sheffield: Sheffield Universities’ Staff Development Unit, 1993.
  • Buttlar, Lois and Rosemary Du Mont. Library and information science competencies revisited. Journal of Education for Library and Information Science, 1996, 37, 44–62.
  • CAA Centre. Blueprint for computer-assisted assessment. CAA Centre, 2000. (URL: http://www.caacentre.ac.uk). [25.5.2001].
  • Callear, David and Terry King. Using computer-based tests for information science. Association for Learning Technology Journal, 1997, 5(1), 27–32.
  • Carneson, John, Georges Delpierre and Ken Masters. Designing and managing multiple choice questions. Leicester: CASTLE, 1998. (URL: http://www.le.ac.uk/castle/resources/mcqman/mcqcont.html).
  • Carter, Richard. A taxonomy of objectives for professional education. Studies in Higher Education, 1985, 10, 135–149.
  • Chalkley, Brian. Using optical mark readers for student assessment and course evaluation. Journal of Geography in Higher Education, 1997, 21, 99–106.
  • Charman, Dan and Andrew Elmes. Computer based assessment. Plymouth: SEED Publications, 1998. (URL: http://www.science.plym.ac.uk/departments/seed/download.htm).
  • Computer Assisted Assessment (CAA) Unit. (URL: http://www.loughborough.ac.uk/service/ltd/flicaa/index.html). [25.6.2001].
  • Cox, Kevin and David Clark. The use of formative quizzes for deep learning. Computers and Education, 1998, 30, 157–167.
  • Curtis, Anita D. The use of computer assisted assessment within the curriculum in Library and Information Studies/Science schools. MA dissertation, Department of Information Science, Loughborough University, 1999.
  • CVCP Universities’ Staff Development and Training Unit. Effective learning and teaching in higher education. Sheffield: CVCP, 1992.
  • Danson, Myles and Carol Eabry, eds. Fifth International Computer Assisted Assessment Conference. Loughborough: Learning and Teaching Development, 2001.
  • Danson, Myles and Robert Sherratt, eds. Third Annual Computer Assisted Assessment Conference, 1999. Loughborough: Learning and Teaching Development, 1999.
  • Delpierre, G.R. The degradation of higher levels of the cognitive domain and its implication for the design of computer-based question episodes. Studies in Higher Education, 1991, 16(1), 63–71.
  • Edwards, Anne and Peter Knight, eds. Assessing competence in higher education. London: Kogan Page, 1995.
  • Feather, John and Paul Sturges, eds. International Encyclopedia of Information and Library Science. London: Routledge, 1997.
  • Gibbs, Graham. Assessing more students. Oxford: The Polytechnic and Colleges Funding Council, 1992.
  • Gretes, John A. and Michael Green. Improving undergraduate learning with computer-assisted assessment. Journal of Research on Computing in Education, 2000, 33(1). (URL: http://www.iste.org/publishing/jrce/index.html).
  • Habeshaw, Sue, Graham Gibbs and Trevor Habeshaw. 53 interesting ways to assess your students. 3rd ed. Bristol: Technical and Educational Services, 1993.
  • Hawkes, Trevor. An experiment in computer-assisted assessment. Interactions, 1998, 2(3). (URL: http://www.warwick.ac.uk/ETS/interactions/vol2no3/hawkes.html).
  • Heard, Sue, Jacqui Nicol and Simon Heath. Setting effective objective tests. Aberdeen: MERTaL Publications, 1997.
  • Jones, Alan. Setting objective tests. Journal of Geography in Higher Education, 1997, 21, 106–114.
  • Knight, Peter, ed. Assessment for learning in higher education. London: Kogan Page, 1995.
  • Kniveton, Bromley H. A correlational analysis of multiple-choice and essay assessment measures. Research in Education, 1996, 56, 73–84.
  • Learning and Teaching Support Network. (URL: http://www.ltsn.ac.uk). [31.8.2001].
  • Lumsden, Keith G. and Alex Scott. Economics performance on multiple choice and essay examinations: a large-scale study of accounting students. Accounting Education, 1995, 4, 153–167.
  • McBeath, Ron J., ed. Instructing and evaluating in higher education: a guidebook for planning learning outcomes. Englewood Cliffs, NJ: Educational Technology Publications, 1992.
  • McKenna, Colleen and Joanna Bull. Quality-assurance of computer-assisted assessment: practical and strategic issues. Quality Assurance in Education, 2000, 8, 24–31.
  • Otter, Sue. Learning outcomes in higher education. London: Unit for the Development of Adult Continuing Education, 1992.
  • Question Mark Perception authoring manual. 2nd ed. London: Question Mark Computing, 1999.
  • Roman, Steven. Access database: design and programming. 2nd ed. Sebastopol, CA: O’Reilly, 1999.
  • Sherron, Gene T. and Marie B. Landry. Reinventing the bachelor’s degree: call it “Information Studies!” Journal of Education for Library and Information Science, 1999, 40(1), 48–56.
  • Stephens, Derek. Computer assisted assessment within the Department of Information and Library Studies at Loughborough University. INFOCUS, 1997–8, 2(2), 7–8.
  • Todd, Ross J. and Joyce Kirk. Concept mapping in information science. Education for Information, 1995, 13, 333–347.
  • Weaver, Ruth and Brian Chalkley. Introducing objective tests and OMR-based student assessment: a case study. Journal of Geography in Higher Education, 1997, 21, 114–121.
  • Wood, Robert. Assessment and testing: a survey of research commissioned by the University of Cambridge Local Examination Syndicate. Cambridge: Cambridge University Press, 1991.
  • Zakrzewski, Stan. The Luton experience - implementing wide scale computer-based assessment. INFOCUS, 1997–8, 2(1), 11–12.
