Editorial


Pages i-iii | Published online: 03 Feb 2007

Behaviour and Information Technology (BIT) is an international scientific journal which focuses on the relationship between people and technology. Although there have been many changes in the field of human-computer interaction (HCI) since we launched BIT more than twenty years ago, one thing has remained constant (at least in my opinion). The successful development and use of computing technology depends as much on understanding human behaviour as it does on understanding hardware or software. Potentially innovative technologies fail because we cannot or will not use them. Really successful innovations, like text messaging on mobile phones, succeed because they tap into some genuine human need (to communicate, to share experience, to coordinate our activities).

In this context, I believe that our journal plays an important role in providing a single, fully refereed environment for papers from a wide variety of disciplines from psychology and management to computer science. It is no coincidence that the human aspects (behaviour) appear before technology in the title.

BIT publishes papers in a variety of formats and styles, not just traditional, experimental studies. This does not mean less rigorous or of lower quality. It just means different. Our referees are briefed to consider the following criteria, in addition to specific points about the quality of the submission.

1. BIT is deliberately broad in coverage of the human aspects of information technology, whilst putting the emphasis on a ‘people before technology’ approach.

2. Papers must be honest. Limited exploratory studies can be quite acceptable, provided that the author is clear about the limitations of the study and does not mislead the reader by extravagant claims for the findings.

3. Papers must be worth reading. The reasons may be because they contain original results, new arguments, useful reviews, interesting opinions or any other reason that convinces the referees and editors. We are not hidebound by preconceived ideas about acceptability.

The editorial policy of Behaviour and Information Technology is and indeed always has been (since the journal was founded in 1981) that we are actively and deliberately multi-disciplinary and broad-based. We have resisted formalising the classifications because we believe that focusing on the ‘human’ in human-computer interaction inevitably involves taking account of many different issues. The user's eyeballs, which may be working hard to read computer-produced images, are not just part of a complex visual information processing system, but also linked to the hands operating the keyboard or mouse, the bottom sitting on a chair and the mind which has knowledge, attitudes, emotions and beliefs.

These cannot be regarded as independent, unrelated aspects. The discomfort arising from badly designed equipment may distract the users and prevent them from concentrating on the cognitive component of their tasks. Conversely, highly motivated individuals may not notice the poor ergonomics of their workplace – at least not at the time.

So we will continue to publish papers from a variety of perspectives and on a range of diverse topics. This issue is no exception with topics ranging from innovative hardware devices to why no two usability experts ever seem to agree.

IMPROVING THE USER INTERFACE

Over the life of BIT, I have witnessed an explosion in our field, both in terms of the numbers of people involved and in the names they use to describe their activities. In the early days, what I did was called computer ergonomics (or human factors in the United States). Now there are people called HCI specialists, usability professionals and even user experience architects although no doubt they would argue that what they do is different (subtly different in my view). However, what all these people generally have in common is a desire to improve the experience of people using technology.

The other trend I have observed is that many people seem to focus solely on what I used to call software usability. Now, it is true that software dictates much of what computer technology does and designing it to be usable is very tricky. But I do believe we often overlook the impact of hardware issues. At least part of what motivates people to select a specific mobile phone is how it feels in their hands – not just the elegance of the predictive text software or the menu structure.

So I am pleased that the first paper in this issue of BIT brings us down to earth by looking afresh at the deceptively simple and common task of scrolling.

Leslie Chipman, Benjamin Bederson and Jennifer Golbeck from the Human-Computer Interaction Laboratory at the University of Maryland compare two different linear input devices – a new slider device they call a SlideBar and a mouse wheel – with a standard mouse-controlled scroll bar. Their experimental results look promising although, as they argue themselves, the best results seemed to come when users employed a combination of devices (in their experiment, the SlideBar with the mouse wheel). As someone who welcomes individual choice, I welcome both their innovation and also the reminder that even simple tasks like scrolling should not be overlooked in our endeavours to improve user experience.

Wearable computers are currently receiving a great deal of attention. As an avid Personal Digital Assistant (PDA) and laptop user, I can understand why having the computer with you at all times makes a huge difference. One major field of application of wearable computers involves inspection and maintenance tasks on complex physical systems such as aircraft and military hardware. I understand that the US Navy calculated that replacing all the paperwork on one of its warships with CD-ROMs would allow it to float several inches higher in the water. One of the problems with wearable systems is that their displays tend to be limited in size, and this can make it difficult to provide sufficient contextual information to users. The risk then is that they follow the instructions far too literally without really thinking about what they are doing. Jennifer Ockerman and Amy Pritchett from the Georgia Institute of Technology, Atlanta, report a series of studies where they were able to improve users' performance on procedural tasks by presenting contextual information on the small displays. Their results were rather mixed but suggested that the medium itself was not the key factor.

Of course, just because I believe that hardware is important, does not mean that we can ignore software. Chris Condon, Mark Perry and Robert O'Keefe from the Department of Information Systems and Computing at Brunel University in London describe a semiotic analysis of the ‘save as … ’ command in Microsoft Word. Semiotics (the study of signs and symbols) provides a number of analytical techniques for exploring users' understanding of such commands and the authors argue that their techniques allow them to predict where users will have difficulty, especially when faced with inconsistency in the interface.

IT'S NOT ALL GOOD

Every benefit from technology seems to have a down side. I mentioned earlier the positive impact of text messaging. In my own family, the text message and the mobile phone allow us to remain part of our grown-up children's daily lives in ways which used to be possible only when everyone lived nearby.

But, as a regular train user, I am only too aware of the down side. In my one hour (when the trains are running properly) journey from London, I learn far more about my fellow commuters' personal and business lives than I ever wanted.

Andrew Monk, Jenni Carroll, Sarah Parker and Mark Blythe from the Department of Psychology, University of York explored this problem in an intriguing way. They exposed members of the public to actors carrying on loud conversations on trains and at bus stations, both face-to-face and on mobile phones. Sure enough, they found that the mobile phone conversations were more annoying, especially on the trains, and go on to explore ways in which manufacturers and train operators can help ensure users behave more considerately than some of the commuters with whom I share my train.

One of the recurring themes in BIT over the years has been the relative difficulty of reading text on a computer display versus reading the same text on paper. Initially, there were very obvious image quality differences which could explain such findings. But as cathode ray tube (CRT) computer displays have caught up in terms of luminance, contrast, resolution, character definition and so on, the advantage of paper has remained. Most people I know print out documents in order to proofread them carefully. Kate Garland from the School of Psychology at the University of Leicester and Jan Noyes from the Department of Experimental Psychology at the University of Bristol explore this issue in some detail and found that over time, with close matching of both sources, the differences in performance became insignificant. However, they still found that users behaved differently and go on to try to explain these differences in terms of the cognitive interference caused by the CRT monitor characteristics of refresh rates, fluctuating luminance, and contrast levels.

Antti Oulasvirta from the Helsinki Institute for Information Technology and the University of Helsinki, and Pertti Saariluoma from the Agora Human Technologies Center, University of Jyvaskyla, also in Finland, describe some studies on another annoying fact of computer usage – the distracting effect of interrupting messages. They report a number of studies which demonstrate how interrupting messages can disrupt long-term working memory and discuss a number of the design implications which follow from this analysis.

Finally, in this issue of BIT, a paper about why usability experts never seem to agree. Rolf Molich from DialogDesign in Denmark and Meghan Ede, Klaus Kaasgaard and Barbara Karyukin from Wells Fargo, Yahoo! and Xerox in the United States respectively report a comparative usability study conducted by nine different facilities. The results make slightly depressing reading for those who want industry to take usability more seriously. Unsurprisingly, a wide variety of methods and approaches were used. However, out of a total of 310 different usability problems, only two were reported by six or more organisations, while 232 problems (75%) were identified by only one of the groups – and some of these were categorised as serious. They draw a number of conclusions – from the obvious ‘not all usability tests and testers are equal’ to the less obvious ‘place less focus on finding all problems’. They argue that despite their findings, usability testing is still worthwhile, but that testers should focus on the most important issues and pay more attention to their own quality control. Having interviewed and rejected for employment a number of people who claim to be able to perform competent usability tests, I would certainly agree.
