Methodology Review

Timing and Tracking: Unlocking Visitor Behavior

Pages 47-64 | Published online: 16 Apr 2009

ABSTRACT

There is a long history of observing visitors in museums, with the majority of the systematic observational work being done in the past 20 years or so. This article reviews the history of timing and tracking in museums, and provides a detailed description of methods used to record, analyze and report timing and tracking data. New technologies that can be used to improve the data collection and entry process are discussed, and ways in which timing and tracking data can be used to improve exhibit design are suggested.

Origins of Timing and Tracking in Museums

The origin of timing and tracking in museums dates back to the early part of the 20th century, when CitationRobinson (1928) and Melton (Citation1935, Citation1936) did what many consider to be the first systematic observations of museum visitors. They made observations of general patterns of visitation in museums, and Melton talked about the right-turn bias, which is still researched today. After Robinson and Melton, there was very little published in this area until the 1970s (CitationCohen, Winkel, & Olsen, 1977; CitationParsons & Loomis, 1973), and it wasn't until the 1980s that there was a large increase in the number of published studies using visitor observation (CitationFalk, Koran, Dierking, & Dreblow, 1985; CitationPeart, 1984; CitationRosenfeld & Turkel, 1982). The person doing the most research in this area, especially in the area of visitor orientation, circulation, and wayfinding, was Dr. Stephen Bitgood and his colleagues at Jacksonville State University (CitationBitgood, 1988, Citation2003; CitationBitgood, Benefield, Patterson, Lewis, & Landers, 1985; CitationBitgood & Dukes, 2006; CitationBitgood & Patterson, 1986a, Citation1986b; CitationBitgood, Patterson, & Benefield, 1988; CitationBitgood & Richardson, 1986; CitationPatterson & Bitgood, 1988). In fact, Volume 1, Issue 4 of Visitor Behavior, the precursor a couple of times removed from Visitor Studies, was a special issue dedicated to Orientation and Circulation. Suffice it to say that observational measures had become a staple of visitor studies by the end of the 1980s.

The acceptance of visitor observation as a valid and reliable method continued through the 1990s and by then there were enough institutions conducting timing and tracking studies for CitationSerrell (1998) to write Paying Attention, about researching, compiling, and comparing visitor behavior data across 110 exhibitions in various types of museums, zoos, and aquariums. This seminal publication brought timing and tracking of whole exhibitions to the forefront of the visitor studies field, helped to standardize the way in which these data were collected and reported, and at the same time provided comparative data for those already conducting timing and tracking studies.

Serrell's publication also raised some interesting points for debate, however, including the idea that one measure of the success of an exhibition is indicated by 51% of the visitors stopping at 51% or more of the exhibits. To her credit, Serrell included a section in the book titled “Other Points of View”, which included colleagues' comments and critiques of her approach. In a recent article, CitationRounds (2004) noted that visitors typically view only 20% to 40% of an exhibition. He postulated that “partial use of exhibitions is an intelligent and effective strategy” and that visitors' “selective use of exhibit elements results in a greater achievement of their own goals than would be gained by using the exhibition comprehensively” (p. 389). So should we use the 51% solution or be content with visitors' foraging behavior? The approach that we have found most useful has been for each institution to compare across their own exhibitions by routinely gathering the same data in the same manner. The most important thing is to set realistic expectations; just how many exhibits in an exhibition do you expect visitors to attend to, and for how long? While Serrell's (1998) data are extremely useful, only by trying to understand your own visitors in different exhibitions at your own institution will you start to see the patterns that provide those expectations. Despite these issues, there is no doubt that CitationSerrell (1998) provided a means by which we could talk about what success for an exhibition might look like in terms of visitor behavior.

Many timing and tracking studies have been conducted since the 1990s and today they are routinely included as a necessary part of understanding and measuring the success of an exhibition. See http://www.informalscience.com/evaluation/report_list.php for examples of summative evaluations, many of which include timing and tracking. Timing and tracking is now assumed to be an important part of understanding the visitor experience and is included in many publications discussing visitor studies (CitationBorun & Korn, 1999; CitationDiamond, 1999; CitationHein, 1998; CitationLoomis, 1987). Visitor behavior is now routinely investigated, either as part of an overall approach to evaluating an exhibition or as an important focus of research in its own right (CitationChiozzi & Andreotti, 2001; CitationFalk, 1993; CitationKlein, 1993; CitationKorn & Jones, 2000).

What is Timing and Tracking?

Historically, tracking in museums had to do with visitor circulation (CitationMelton 1935, Citation1936; CitationRobinson, 1928). It was almost exclusively a measurement tool for recording physically where within the institution or an exhibition a visitor went. Some of the earliest studies tracked the wear patterns on carpet, for want of a more accurate method. In more recent times tracking visitors refers more specifically to recording, in a detailed manner, not only where visitors go but also what visitors do while inside an exhibition. It can provide quantitative data in relation to stay times as well as other behavioral data. Visitor tracking can be done unobtrusively, where the visitors are not aware they are being observed, or cued, where visitors are asked if they can be observed.

There are a variety of different ways to observe visitors in museums, and we will make a distinction not on method but on the unit of analysis. For the purpose of this article, timing and tracking refers to following and recording visitor behavior in an area larger than a single exhibit component, usually an exhibition. While it can involve a whole institution, it is much more common for it to occur in a single exhibition. Therefore, single exhibit observation falls outside of the scope of this article, although we do refer to it in places. Additionally, we advocate using timing and tracking in conjunction with other methods, particularly interviews, since timing and tracking on its own sheds little light on why people are behaving the way they do. Using both allows us to study the relationship between what a visitor does and the intended outcomes of the exhibition. However, for the purposes of this article we will focus on timing and tracking as a stand-alone method. There are some evaluation reports posted on www.informalscience.org that show how timing and tracking can be combined effectively with other methods.

Knowing where a visitor moves within an exhibition space is important to museum professionals such as exhibition designers and planners. It enables them to determine how visitors are using the various components of the exhibition, whether the exhibition has good flow, and whether visitors are engaging with the exhibits in the manner intended.

Recent advances in technology have made it possible to record and report visitor behavior much more accurately, and there are many museums that are currently using video technology in this way. However, because most exhibitions cannot be recorded with a single camera angle, the vast majority of video observational studies are for single exhibits or galleries. While video is an excellent tool for this kind of focused study of visitor behavior, it is not feasible for most exhibitions. Attempting to patch together multiple videos to accurately record visitor behavior over a larger area can be extremely frustrating. Since video cannot easily solve the problem of tracking visitors over larger spaces, most museums still rely on a paper-and-pencil method to record how visitors move through exhibitions.

Which Variables Do You Record?

There are a multitude of variables that can be recorded in a timing and tracking study. Which variables should be included will depend on the scope of the particular study, resources available in terms of money and personnel, which variables have been recorded previously, how the results will be used, and the skill of the evaluator and/or data collectors. This is one of the reasons why Serrell's (1998) study was so useful; she was able to offer a standardized approach that included the basic variables to collect in timing and tracking studies.

The variables for timing and tracking studies fall into four categories: 1) stopping behaviors, 2) other behaviors, 3) observable demographic variables, and 4) situational variables.

  1. Stopping Behaviors—This group of variables is used to describe where people went, where they stopped and how they spent their time:

    • Total time in area

    • Total number of stops

    • Proportion of visitors who stop at a specific element

    • A level of engagement scale for specific elements (i.e., high, medium, low)

    • Time (min:sec) of a stop at a specific element

    • “Down time” or non-exhibit related behaviors, such as talking on a cell phone or discussing something not related to the exhibition

  2. Other Behaviors—These often describe what people did, above and beyond the stops, and include:

    • Visitor path (the route a visitor takes through the space)

    • Social interactions with others in group

    • Social interactions with other visitors

    • Social interactions with docents or volunteers

    • Using hands-on/interactive elements

    • Watching videos

  3. Observable Demographic Variables—These can only be estimated, so it is assumed that there will be a margin of error:

    • Estimated age

    • Number of adults and children in party

    • Gender

  4. Situational Variables—These include any situational variables that may affect visitor behavior:

    • Levels of crowding

    • Month or season

    • Day of week

    • Time of day

    • Any special events or programs going on at the museum

    • Any special events or programs occurring in the exhibition

    • Presence of staff, carts, or other related experiences

This is by no means an exhaustive list of variables, but those that are commonly included in timing and tracking studies. Since we are operating on the assumption that the observed visitor is not being interviewed, the demographic variables include only those that can be estimated or recorded through observation. There is, however, a distinct advantage to doing an interview in conjunction with timing and tracking since it is then possible to collect whichever demographic or psychographic variables you think may affect visitors' behaviors.
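To make the four categories concrete, the record for a single tracked visitor might be sketched as a simple data structure. This is only an illustration; the field names are hypothetical, not part of any standard timing and tracking instrument.

```python
from dataclasses import dataclass, field

# Hypothetical record for one tracked visitor; all field names are
# illustrative, not drawn from any standard instrument.
@dataclass
class TrackingObservation:
    visitor_id: int
    # 1. Stopping behaviors
    total_seconds_in_area: int = 0
    stops: dict = field(default_factory=dict)    # element name -> seconds attended
    # 2. Other behaviors
    path: list = field(default_factory=list)     # ordered list of elements visited
    social_interactions: int = 0
    # 3. Observable demographic variables (estimates only)
    estimated_age: str = "unknown"
    adults_in_party: int = 1
    children_in_party: int = 0
    # 4. Situational variables
    crowding_level: str = "low"
    day_of_week: str = ""
```

A structure like this also makes explicit which fields are estimates (the demographics) and which are directly measured (the stopping behaviors).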

Stopping or Attending To?

CitationSerrell (1998) described a stop as “a visitor's stopping with both feet planted on the floor and head or eyes pointed in the direction of the element for 2 to 3 seconds or more” (p. 12). Since Serrell's book is a seminal publication on timing and tracking this has in some manner become the standard for the field, even though in her book Serrell mentioned an exception to the stopping rule for very large elements. The “planted feet” criterion can be problematic when specific components are large enough that a physical stop is not required to engage it. For example, aquariums and zoos often have large tanks or enclosures that typically involve strolling by while engaged in viewing them. In art museums, visitors can engage with very large paintings or installations without physically planting their feet. The first author of this article uses “attending to” to define a stop, and planting of the feet is not required. In this way, the components that don't require you to plant your feet will still be counted in the time that visitors spend engaged with the exhibit. This is particularly important with large zoos, aquariums and historic areas. However, because of the increased ambiguity, more training and inter-rater reliability testing is required to make sure each data collector is recording “attending to” in the same manner.

Basics of Timing and Tracking

This section covers the basics and logistics of timing and tracking, as well as some issues related to this method.

Selecting a Visitor/Beginning the Observation

It is important to note that usually one visitor from each group is selected for observation. When dealing with an exhibition it is not uncommon for groups to split up at various times, so the focus needs to be on one visitor. From the time this visitor enters the exhibition, his/her movements are observed and recorded. Typically the data collector will draw an “imaginary” line at the entrance, or entrance points, and select every 3rd visitor that crosses the line. If visitation is low, you might select the next visitor to cross the line once you are ready to begin tracking; if visitation is very heavy, you might select every 10th visitor. The point is to develop a system that ensures as “random” an approach as possible. Many timing and tracking studies observe only adults, those who appear to be 18 years or older, in order to avoid informed consent and ethical issues. Sometimes, however, it is necessary to include children in the sample, for example, if you are studying school groups or looking at a child-themed exhibition.
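The every-nth-visitor sampling rule described above can be sketched in a few lines of code; the function name and interface are hypothetical.

```python
def select_visitors(crossings, n=3):
    """Yield every nth visitor who crosses the imaginary entrance line.

    `crossings` is any iterable of visitor identifiers; n=3 implements
    the every-3rd-visitor rule, and n can be raised (e.g., to 10) when
    visitation is heavy.
    """
    for count, visitor in enumerate(crossings, start=1):
        if count % n == 0:
            yield visitor
```

With n=3, visitors 3, 6, and 9 of the first ten to cross the line would be selected for tracking.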

Conducting the Observation

The distance between the observer and the visitor depends on the aim of the study and the nature of the behaviors to be recorded. If the focus is on the total time spent in a large area or gallery, the distance will be greater than if the focus is on specific behaviors at specific elements. The more detailed the behaviors, the closer the distance. Defining what constitutes an “element” is often driven by how accurately you can determine what visitors are attending to. For example, if the exhibit components are spaced far apart and each requires a physical stop in order to engage with it, then what is recorded as the “element” is rather simple. However, if there is no clear delineation between experiences (e.g., a “wall” containing labels, pictures, hands-on exhibits and videos), then it may be necessary to group these together. If you cannot accurately tell what someone is attending to by the movement of their body and particularly their head, then you need to group exhibit elements together. The distance between the observer and the visitor also depends on the layout of the exhibition.

One method for recording times at specific exhibits using paper and pencil involves one stopwatch displaying a cumulatively running time. When the visitor enters you start the watch, then simply record the times they start and stop attending to each element. For example, if a visitor enters an exhibition and starts attending to a particular exhibit at 1 minute 30 seconds, you record that as the “start attending” time for that element; when they stop attending, say at 3 minutes 40 seconds, you record that as the “stop attending” time. You record both of these numbers on the sheet for that particular exhibit and then enter the times in two different columns of a database. A third column can be used to simply subtract the “start” time from the “stop” time to give the time, in minutes and seconds, a visitor attends to each element. If you don't need times for every single element you can always record these times only for the elements of interest. The one problem with recording time only at certain elements is that you cannot calculate a “time attending to exhibits” number, or determine how the time in the exhibition is broken down proportionally into the various types of exhibits. The number of stops can also be calculated by coding any “zero” time as no stop and any other time as a stop. This can then be used to calculate the total number of stops, the average number of stops, etc.
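The stopwatch arithmetic described above — subtracting the “start attending” reading from the “stop attending” reading, and coding zero times as no stop — can be sketched as follows (the names and the sample data are illustrative):

```python
def attend_seconds(start, stop):
    """Convert cumulative 'min:sec' stopwatch readings to seconds attended."""
    def to_sec(reading):
        minutes, seconds = reading.split(":")
        return int(minutes) * 60 + int(seconds)
    return to_sec(stop) - to_sec(start)

# The example from the text: attending from 1:30 to 3:40 = 130 seconds.
times = {"tank_a": attend_seconds("1:30", "3:40"), "video_1": 0}

total_attending = sum(times.values())                      # total seconds at exhibits
number_of_stops = sum(1 for t in times.values() if t > 0)  # zero time = no stop
```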

It is also common to include other behavioral measures such as interacting with other visitors or staff, or using hands-on elements. It is possible to record these interactions using a paper-and-pencil method, but care must be taken to ensure that these additional behaviors are recorded accurately. During training, it is important to have multiple data collectors observe the same visitor (or staff person acting like a visitor) to determine when to record a behavior. For example, how many seconds does a visitor have to be talking to a docent for it to count? Also, what do you do if they are simply observing someone in their group doing a hands-on activity—does that count? And what if they are indicating to someone in the group how to do that activity? All of these decisions need to be made beforehand, for if the behaviors are not recorded consistently by all data collectors, the data cannot be used.

Training

The first step is to go through the exhibition with the data collectors and point out which elements are included in the timing and tracking. This is particularly important when exhibits are close together. One approach is to walk through, saying out loud which exhibit is which, and then have the data collectors do the same for you, since it may take a while to orient to a detailed overhead map of the exhibition. When the data collectors are clear how exhibit elements and visitor behaviors are defined, we suggest having the researcher/evaluator pretend to be a visitor and having multiple data collectors track them at the same time. Then you can look at any irregularities in how things were coded, whether times were very different, etc. Continue until the data collectors have resolved most or all of the differences that occurred. This is also a good way to pilot test the instrument and detect any unforeseen circumstances. It also will enable new data collectors to become comfortable with the idea of observing visitors, to think about the distance they need to keep, etc. You can also have them observe each other so that you can spend more time observing the data collector.

Paper-and-Pencil Timing and Tracking

Using the paper-and-pencil method is the most common form of timing and tracking in museums today, probably because it is simpler and more affordable than other options. While cost and ease of use are compelling reasons, we have found some limitations to using paper-and-pencil methods for timing and tracking:

  1. Lack of specificity—Many studies don't include times at each element, because it is difficult for data collectors to accurately record this information. Typically you record only whether a stop occurred or not for each exhibit.

  2. Being obvious to visitors—Writing on clipboards is noticeable.

  3. Use of resources—Transferring the data from the paper to a database can take some time, especially when the sample is large.

  4. Forced to choose—It is almost impossible to accurately record time for different phenomena that occur simultaneously in the exhibition (e.g., you can't easily record total time at an exhibit as well as the portion of that time that was spent talking to a docent. This would require two stop watches). Video recording can facilitate this process if the focus is on one element in an exhibition, but not when recording visitor behavior over a whole exhibition.

Some of these limitations can be addressed by applying handheld computer technology to timing and tracking.

Technology and Timing and Tracking

Other fields have been using cutting-edge technology in their efforts to record behavior for some time. Consumer research has been especially adept at using the latest technologies to understand human behavior, including studying how people shop in grocery stores (CitationUnderhill, 1999). In fact, as a keynote speaker at the Visitor Studies Association Annual Conference in Chicago in 1999, Underhill commented on the connection between his own research in shopping behavior and that of visitor studies. He showed how he used video to observe and analyze shopping behavior in grocery stores.

Technology has allowed for a much more sophisticated and accurate recording of visitor behavior in many areas. Videotaping, provided the area is small enough to be recorded by one camera, can provide a wealth of information and the ability to re-watch segments and recode if necessary. It also enables inter-rater reliability to be measured and the accuracy of observations improved. However, as noted above, this is only possible if the area can be covered with one camera. Video-recording is thus not always feasible for timing and tracking, so the direct observation method persists as the most common way to observe visitors' behavior. However, new techniques have recently become available to support the observation process.

One example of this is the software system, Noldus Observer (www.noldus.com). It was first used in visitor studies, as far as we know, by CitationRoss and Lukas (2005) at the Lincoln Park Zoo. While Noldus Observer has a very powerful desktop-based system, it can also be loaded onto handheld PCs. This has allowed a computer-driven system to replace the more traditional paper-and-pencil method of recording observations. To date, there are about half a dozen institutions or organizations using Noldus Observer to record and analyze museum visitor behavior data. Data collectors use a handheld computer and stylus to record the exhibits to which visitors attend. The exhibition is divided first into major areas (first level) and then the exhibits in each area are listed (second level). The data collector selects first the area, then the exhibit the visitor is attending to. A “down time” category can also be included for when visitors are not attending to any of the labeled exhibits. This enables both the amount of time visitors attend to each exhibit, and the order in which they attend to exhibits, to be automatically recorded. It is also possible to program a list of additional variables with drop-down menus that can be used to record demographic variables, conditions during data collection, etc.Footnote 1

Electronic behavioral coding and analysis systems such as Noldus Observer have some distinct advantages over paper-and-pencil measures:

  • More accurate—for example, there is evidence that paper-and-pencil methods underestimate time at exhibits when exhibits are close together;

  • Able to record separate times for concurrent behaviors;

  • Not difficult to learn;

  • No data entry necessary—data are simply downloaded directly into SPSS or other spreadsheet software; and

  • Less obvious—a handheld PC does not attract nearly as much attention as a clipboard.

The most compelling reason to consider using software such as Noldus Observer is that it results in more accurate data, although accuracy may be less of an issue in simpler applications, such as recording time in large areas. The equipment is relatively expensive (approximately US $1,000 for one license and a handheld computer) but saves considerable time in data entry and provides a more sophisticated level of data collection and analysis than is possible with paper-and-pencil techniques.

CitationMoussouri (2005) presented another interesting application of technology in timing and tracking, the Museum Experience Recorder (MER) system, whereby visitors wore a device to track and digitally reproduce their path through the British Museum. This approach would obviously not be unobtrusive, but it raises the question of how far we are willing to go to get accurate information about visitor behavior; should we ask visitors to wear devices that track their path? It is likely that as these systems become smaller and more unobtrusive they could be used more often. With technologies such as these, timing and tracking can advance to a more efficient and accurate method.

While these technologies require an initial investment in both time and money, they can actually reduce costs over time. For example, the cost of software such as Noldus Observer can be recouped in just a few projects, when you consider data entry costs, either in staff wages or hired out. Experimenting with new technologies is an important way to advance the field and improve upon what we are already doing.

Ethics/Informed Consent

While there may not be a legal requirementFootnote 2 to inform visitors, due to a reasonable expectation of being observed in a public place, many researchers feel that there is an obligation to let visitors know that observations are taking place. Some researchers believe that the visitor should be notified directly (cued) before being observed, while others believe a general announcement suffices. Oftentimes this means posting a sign that says visitors are being observed, although there is some evidence that this will not necessarily ensure that visitors are aware of the observations (CitationGutwill, 2002, Citation2003). There is no single accepted way within the field of visitor studies to inform the visiting public that timing and tracking studies are occurring. However, it is suggested that if the timing and tracking is to be followed by an interview, visitors should be informed at the outset.

Regardless of your philosophical perspective, it is extremely important to know what the legal issues are in your local, state, and federal governments, regarding the unobtrusive observation of people in public places. An important factor is whether the research or exhibition is funded or not. If so, the funding agency, including government agencies, may have specific requirements regarding informed consent for research studies.

There are several “codes” of conduct governing ethical principles in research. The British Psychological Society (2000–2004) and the Belmont Report of 1979 (CitationU.S. Food & Drug Administration, 1998) both discuss the ethical treatment of human participants in research. Both advocate that participants be given all the necessary information in order to make a self-determined judgment about whether they should participate in the research. This is usually referred to as informed consent (CitationUSFDA, 1998). Participation in the research should be completely voluntary and not coerced by the researcher. Both “codes” suggest that the nature or scope of the research should be fully divulged to the participant. The participant should also be informed of the risks associated with participating. Increasingly, non-university researchers are turning to Institutional Review Boards (IRBs) to ensure that a study meets the requirements listed above. While not yet required, IRB review is becoming more common for exhibitions funded through National Science Foundation grants. Also, the Visitor Studies Association (VSA) has a committee looking into the use of IRBs in visitor research. While it may take a while to settle the issue or to develop guidelines, it is good news that progress is being made.

Another dilemma, which occurs very rarely, is what to do when a data collector is noticed by the unobtrusively observed visitor. If this does occur it is important to be honest and explain the purpose of what is being done and why; in fact, it is a good idea to have an information sheet explaining the study already prepared. This sheet should contain the name and contact information of the principal researcher; having business cards in hand is also a good idea. Even if the visitor is not aware that they are the one being observed, the observation should end and the circumstances should be noted on the tracking sheet.

Data Analysis and Reporting

This section addresses data analysis and visual representation of the data. Reporting timing and tracking data in a manner that is accessible, understandable, and useful is extremely important. Some reports consist merely of a list of elements with times or stopping percentages next to them. One step better, but still not as helpful as it could be, is including the numbers on the map that was used to do the timing and tracking. The Going from Tables and Lists of Numbers to Visual Representation section below includes examples of color-coding the map to facilitate the interpretation of the data. Timing and tracking studies enable useful patterns and trends to be identified. Visual representations are often the most useful and accessible way to report these data to the exhibit designers, who are often making decisions about the layout and placement of the various exhibits.

Reporting Time and Percentage

The most basic information resulting from timing and tracking is the amount of time and the number of stops a visitor makes. However, there are various approaches you can use to report these data.

Main variables for reporting time:

  • Total time in the exhibition

  • Time spent in the specific areas (e.g., sections, rooms, etc.)

  • Time at each element (e.g., a specific piece of art, interactive exhibit, label, etc.)

Main variables for reporting stops (or “attending to”):
  • Number of total stops

  • Average number of stops

  • Percentage of exhibits stopped at (or attended to)

  • Percentage of total time stopped at (or attending to) exhibits

  • Percentage of total time stopped at (or attending to) each type of element (e.g., interactive, art, objects, etc.)
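As a sketch of how the stop measures above might be computed from per-visitor data — the structure of `observations` (a list of dicts mapping element names to seconds attended) and the function name are hypothetical:

```python
def summarize_stops(observations, n_elements):
    """Compute total stops, average stops per visitor, and the average
    percentage of exhibits stopped at, coding any nonzero time as a stop.

    `observations`: list of dicts, one per visitor, element -> seconds.
    `n_elements`: number of exhibit elements in the exhibition.
    """
    stops_per_visitor = [
        sum(1 for t in obs.values() if t > 0) for obs in observations
    ]
    total_stops = sum(stops_per_visitor)
    average_stops = total_stops / len(observations)
    percent_stopped = 100 * average_stops / n_elements
    return total_stops, average_stops, percent_stopped
```

For two visitors in a two-element exhibition, one stopping at a single element and the other at both, this yields 3 total stops, 1.5 stops on average, and 75% of exhibits stopped at.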

The last variable is extremely useful to understand and help set expectations for visitor behavior at various types of exhibits. As an example, this type of analysis was reported by CitationYalowitz and Ferguson (2006) in a summative evaluation of the Sharks: Myth and Mystery exhibition at the Monterey Bay Aquarium (see Table 1 and Figure 1). This information, coupled with data from the aquarium's other summative evaluations, has helped us understand the specific behavioral norms of visitors across different exhibitions. In fact, remarkable similarities were found across the different exhibitions in terms of stay times, proportion of time at exhibits, and the types of exhibits where visitors spent their time. The full Sharks: Myth and Mystery summative report can be accessed in the “Evaluation” section at www.informalscience.org.

Table 1 Sharks: Myth and mystery, percentage of total time spent by visitors based on type of exhibit

Figure 1 Percentage of total time spent by visitors in general. Copyright 2006 Monterey Bay Aquarium Foundation.


Sweep Rate Index (SRI)

One of the problems in comparing different exhibitions is that the amount of time spent in an exhibition is greatly affected by the size of the exhibition. You wouldn't expect someone to spend the same amount of time in a 2,000 square foot exhibition as you would in a 10,000 square foot exhibition. CitationSerrell (1998) addressed this issue using the Sweep Rate Index (SRI), a measure that standardizes for differences in the size of exhibitions: the square footage of the exhibition divided by the median total time (in minutes) spent in it.

For example, a 5,000 square foot exhibition with a median visit time of 20 minutes would have an SRI of 250. That same exhibition with a median visit time of 30 minutes would have an SRI of 167. By dividing the square footage of the exhibition by the time spent in it, the SRI gives us a measure of how quickly visitors move through an exhibition: the lower the Sweep Rate Index, the more time visitors are spending in the exhibition. In her book, Serrell calculated the Sweep Rate Index for over 100 exhibitions at a variety of institution types. This gives us an excellent comparative set for how quickly visitors, as a group, move through exhibitions. One of the authors of this paper routinely selects comparable exhibitions from the book (in terms of type of museum and size of exhibition) as a comparison measure for the exhibition they are evaluating. For example, you can average the SRI for groups such as science museum exhibitions over 4,000 square feet. This will help answer the “So is that good or bad?” question when you report the total time in the exhibition. However, the most appropriate comparison is with other exhibitions at the same institution, because of the influence of overall attendance and visitor profiles.
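The SRI arithmetic from the worked example above can be written as a one-line calculation (the function name is illustrative):

```python
def sweep_rate_index(square_feet, median_minutes):
    """SRI = exhibition square footage / median total visit time (minutes).

    Lower values mean visitors are spending more time per unit of space.
    """
    return square_feet / median_minutes

# The worked examples from the text:
sri_fast = sweep_rate_index(5000, 20)   # 250.0
sri_slow = sweep_rate_index(5000, 30)   # about 167
```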

While the Sweep Rate Index has provided a much-needed measure, it is not without its critics. Some argue that institution size and crowd levels can affect time in an exhibition to a great degree and need to be factored in. For example, imagine the immensity of the Louvre or one of the Smithsonian's main museums and the pace at which a tourist spending half a day there might move through a gallery, compared to how they would move in a smaller art museum with only a couple of galleries. In any case, the SRI should be used in conjunction with other measures to determine the success of an exhibition.

Patterns and Trends Among the Data

One of the most effective uses of timing and tracking is identifying trends and patterns in visitor behavior that can inform the design of future exhibitions. Thus the most important measures for timing and tracking, from a design perspective, are those that can be aggregated to reveal the influence of the mix of exhibit elements on visitor circulation, stopping, and time spent. For example, after years of systematically collecting timing and tracking data, the Monterey Bay Aquarium discovered distinct patterns in how visitors used different types of elements (Yalowitz, 2002, 2004; Yalowitz & Ferguson, 2006). Over multiple special exhibitions, the percentage of visitors attending to specific exhibits consistently showed the same order, from highest to lowest: large live animal tanks, medium live animal tanks, small live animal tanks, hands-on/interactive exhibits, videos, objects, and text-only panels. While the order was not all that surprising, this information allowed the exhibit team to configure exhibitions to reduce visitor crowding and maximize flow, and it helped set realistic expectations for particular types of exhibits. It also allowed the team to try new approaches with the exhibits and see how the changes affected patterns of visitor behavior. For example, in the Sharks: Myth and Mystery exhibition, many cultural elements were added, with a heavy emphasis on technology. An object theater exhibit about the tale of Mother Stingray was so heavily used that, for the first time since data collection began, a special exhibition's most popular element was a non-living exhibit. This was extremely useful in understanding that non-living exhibits could be among the most popular exhibits.
In the same study it was found that including objects on text panels, such as a fishing pole or snorkeling mask, more than doubled the average proportion of visitors who attended to these types of panels.
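
The kind of aggregation described above (percentage of tracked visitors stopping at each element type) is straightforward to compute once observations are coded. The sketch below assumes a simplified record shape of our own devising, not the aquarium's actual data format.

```python
from collections import defaultdict


def stop_rates_by_type(records):
    """Percentage of tracked visitors who stopped at each exhibit element type.

    `records` is a list of (visitor_id, element_type, stopped) tuples --
    a hypothetical stand-in for a coded timing-and-tracking data set.
    """
    visitors_by_type = defaultdict(set)   # visitors observed at each type
    stoppers_by_type = defaultdict(set)   # visitors who stopped at each type
    for visitor, element_type, stopped in records:
        visitors_by_type[element_type].add(visitor)
        if stopped:
            stoppers_by_type[element_type].add(visitor)
    return {
        t: 100.0 * len(stoppers_by_type[t]) / len(visitors_by_type[t])
        for t in visitors_by_type
    }


data = [
    (1, "large tank", True), (1, "text-only", False),
    (2, "large tank", True), (2, "text-only", True),
]
print(stop_rates_by_type(data))  # {'large tank': 100.0, 'text-only': 50.0}
```

Run over several exhibitions, output like this is what lets the consistent highest-to-lowest ordering of element types emerge.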

Going from Tables and Lists of Numbers to Visual Representation

Using a map to report timing and tracking data is much more effective than lists of exhibits and numbers. This can be done using Microsoft Word or PowerPoint, but specialist software, such as AutoCAD (Computer-Aided Design), provides much better options. If the in-house or external designer uses AutoCAD, a map of the exhibition may already exist that can be modified to include timing and tracking data. Figure 2 shows the proportion of visitors who stop at certain exhibits, using color coding to indicate the hot and cold areas of usage. Figure 3 shows the same exhibition, this time reporting the amount of time people spend at each element, for those who do stop. Both give a more accurate and useful picture of visitor behavior in this area than simply listing the information (see Tables 2 and 3).
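
The three-level shading used on these maps is just a tertile split of the elements by the measure being plotted. A minimal sketch of that classification step, with hypothetical element names and values:

```python
def shading_tertiles(stop_pct):
    """Split exhibit elements into top/mid/low thirds by stop percentage,
    mirroring the three shading levels used on the exhibition maps.

    `stop_pct` maps element name -> percentage of visitors who stopped.
    Returns element name -> "dark", "mid", or "light".
    """
    ranked = sorted(stop_pct, key=stop_pct.get, reverse=True)
    third = max(1, len(ranked) // 3)
    shades = {}
    for i, name in enumerate(ranked):
        if i < third:
            shades[name] = "dark"    # top 1/3: heaviest use
        elif i < 2 * third:
            shades[name] = "mid"     # middle 1/3
        else:
            shades[name] = "light"   # bottom 1/3
    return shades


print(shading_tertiles({"tank": 90, "video": 60, "panel": 20}))
# {'tank': 'dark', 'video': 'mid', 'panel': 'light'}
```

The same function works for average stop time (Figure 3) simply by passing times instead of percentages; the shading categories would then be mapped onto the floor plan in the drawing software.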

Table 2 Nearshore exhibition, stopping at each exhibit

Table 3 Nearshore exhibition, time spent when stopping by exhibit

Figure 2 Sharks: Myth and Mystery exhibition, percentage of visitors stopping at each exhibit. Note: Darker shading indicates areas of relatively high visitor usage (top 1/3 of elements); mid-level shading indicates areas of intermediate visitor usage (mid 1/3 of elements); lighter shading indicates areas of relatively low visitor usage (low 1/3 of elements), in terms of the percentage of visitors who stopped to attend to the element. An example in color can be seen at http://www.informalscience.org/evaluations/report_227.PDF Appendix C. Copyright 2006 Monterey Bay Aquarium Foundation.

Figure 3 Sharks: Myth and Mystery exhibition, average time spent at each exhibit. Note: Darker shading indicates areas of longer visitor stops (top 1/3 of elements); mid-level shading indicates areas of mid-length visitor stops (mid 1/3 of elements); lighter shading indicates areas of shorter visitor stops (low 1/3 of elements), in terms of the average time spent at the exhibit by those who stopped. An example in color can be seen at http://www.informalscience.org/evaluations/report_227.PDF Appendix D. Copyright 2006 Monterey Bay Aquarium Foundation.

How Can Institutions Use Timing and Tracking Results to Inform Decisions?

Timing and tracking information is most useful for helping the exhibit team understand how visitors use the various elements of the exhibition. The team can then evaluate whether the placement and combination of elements is working as expected (at least in terms of where people stop and spend their time). As sets of timing and tracking data accumulate across multiple exhibitions, patterns will emerge that can inform decisions about the floor plans of future exhibitions. Other staff, including educators in charge of the on-site placement of programs, carts, and other stations, can also use timing and tracking data to understand how effective these components have been and to inform decisions about their location and frequency.

Although the uses of timing and tracking data vary from project to project, in general timing and tracking can be used to

  • determine the relative success of the exhibition and specific exhibits,

  • inform placement of exhibits in future exhibitions,

  • set realistic expectations about time and level of engagement,

  • test whether new exhibit approaches increase the frequency of specific behaviors,

  • understand visitor paths and circulation patterns,

  • identify the types of exhibits that are most attractive to visitors,

  • restructure label systems based on data about visitors' reading behavior, and

  • compare trends and patterns across multiple exhibitions.

CONCLUSION

Timing and tracking has become one of the most consistently used methods in exhibition evaluation because it indicates the extent to which visitors are behaving in the expected and intended manner. Whether you are using traditional paper-and-pencil techniques or the most recent technologies, a timing and tracking study provides a wealth of information about the target exhibition, which is also valuable in designing future exhibitions.

ABOUT THE AUTHORS

Steven Yalowitz is a Senior Research Associate at the Institute for Learning Innovation in Edgewater, Maryland. E-mail: [email protected].

Kerry Bronnenkant is a Research Associate at the Museum of Science, Boston.

Notes

a Down time in the exhibition is the percentage of time visitors spent engaged in behaviors other than attending to the exhibits (e.g., moving between exhibits, looking at a map, having conversations, or sitting down).

1. Noldus can be used for much more than timing and tracking and is actually best known for its application as desktop video analysis software.

2. Before using anything less than full informed consent, it is a good idea to check with a lawyer about the relevant state laws. For example, before conducting unobtrusive observations at the Monterey Bay Aquarium, Yalowitz checked with the aquarium's lawyer and found that in that particular case neither state nor federal laws required informed consent. A host of factors contributed to this: no videotaping of visitors, no means of identifying individuals in the data, the setting being a public place, etc.

REFERENCES

  • Bitgood, S. (1988). Problems in visitor orientation and circulation. In S. Bitgood, J. T. Roper, Jr., & A. Benefield (Eds.), Visitor studies 1988: Theory, research and practice (pp. 155–170). Jacksonville, AL: Center for Social Design.
  • Bitgood, S. (2003). Visitor orientation: When are museums similar to casinos? Visitor Studies Today, 6(1), 10–12.
  • Bitgood, S., Benefield, A., Patterson, D., Lewis, D., & Landers, A. (1985). Zoo visitors: Can we make them behave? Annual Proceedings of the 1985 American Association of Zoological Parks and Aquariums, Columbus, OH.
  • Bitgood, S., & Dukes, S. (2006). Not another step! Economy of movement and pedestrian choice point behavior in shopping malls. Environment and Behavior, 38, 394–405.
  • Bitgood, S., & Patterson, D. (1986a). Orientation and wayfinding in a small museum. Visitor Behavior, 1(4), 6.
  • Bitgood, S., & Patterson, D. (1986b). Principles of orientation and circulation. Visitor Behavior, 1(4), 4.
  • Bitgood, S., Patterson, D., & Benefield, A. (1988). Exhibit design and visitor behavior. Environment and Behavior, 20, 474–491.
  • Bitgood, S., & Richardson, K. (1986). Wayfinding at the Birmingham Zoo. Visitor Behavior, 1(4), 9.
  • Borun, M., & Korn, R. (Eds.). (1999). Introduction to museum evaluation. Washington, DC: American Association of Museums.
  • British Psychological Society. (2000–2004). Ethical principles for conducting research with human subjects. Retrieved September 23, 2004, from http://www.bps.org.uk/
  • Chiozzi, G., & Andreotti, L. (2001). Behavior vs. time: Understanding how visitors utilize the Milan Natural History Museum. Curator: The Museum Journal, 44, 153–165.
  • Cohen, M., Winkel, G., & Olsen, R. (1977). Orientation in a museum: An experimental study. Curator, 20(2), 85–97.
  • Diamond, J. (1999). Practical evaluation guide: Tools for museums and other informal educational settings. Walnut Creek, CA: Sage Publications.
  • Falk, J. (1993). Assessing the impact of exhibit arrangement on visitor behavior and learning. Curator: The Museum Journal, 36, 133–146.
  • Falk, J. H., Koran, J. J., Dierking, L. D., & Dreblow, L. (1985). Predicting visitor behavior. Curator, 28, 249–257.
  • Gutwill, J. P. (2002). Gaining visitor consent for research: A test of the posted-sign method. Curator: The Museum Journal, 45, 232–238.
  • Gutwill, J. P. (2003). Gaining visitor consent for research II: Improving the posted-sign method. Curator: The Museum Journal, 46, 232–238.
  • Hein, G. E. (1998). Learning in the museum. London: Routledge.
  • Klein, H. (1993). Tracking visitor circulation in museum settings. Environment and Behavior, 25, 782–800.
  • Korn, R., & Jones, J. (2000). Visitor behavior and experiences in the four permanent galleries at the Tech Museum of Innovation. Curator: The Museum Journal, 43, 261–281.
  • Loomis, R. (1987). Museum visitor evaluation. Nashville, TN: American Association for State and Local History.
  • Melton, A. W. (1935). Problems of installation in museums of art (New Series No. 14). Washington, DC: American Association of Museums.
  • Melton, A. W. (1936). Distribution of attention in galleries in a museum of science and industry. Museum News, 14(3), 6–8.
  • Moussouri, T. Automated visitor tracking and data analysis for research and evaluation in museums. Presented at the Visitor Studies Association Annual Conference, Philadelphia, PA.
  • Parsons, M., & Loomis, R. J. (1973). Visitor traffic patterns: Then and now. Washington, DC: Office of Museum Programs, Smithsonian Institution.
  • Patterson, D., & Bitgood, S. (1988). Some evolving principles of visitor behavior. In S. J. Bitgood, T. Roper, Jr., & A. Benefield (Eds.), Visitor studies 1988: Theory, research and practice (pp. 40–50). Jacksonville, AL: Center for Social Design.
  • Peart, B. (1984). Impact of exhibit type on knowledge gain, attitude change and behavior. Curator, 27, 220–237.
  • Robinson, E. S. (1928). The behavior of the museum visitor (New Series No. 5). Washington, DC: American Association of Museums.
  • Rosenfeld, S., & Terkel, A. (1982). A naturalistic study of visitors at an interpretive mini-zoo. Curator, 25, 187–212.
  • Ross, S. R., & Lukas, K. E. (2005). Zoo visitor behavior at an African Ape exhibit. Visitor Studies Today, 8(1), 4–12.
  • Rounds, J. (2004). Strategies for the curiosity-driven museum visitor. Curator: The Museum Journal, 47, 389–412.
  • Serrell, B. (1998). Paying attention: Visitors and museum exhibits. Washington, DC: American Association of Museums.
  • Underhill, P. (1999). Why we buy: The science of shopping. New York: Simon & Schuster.
  • U.S. Food & Drug Administration. (1998). Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved September 23, 2004, from http://fda.gov/oc/ohrt/IRBS/belmont.html
  • Yalowitz, S. S. (2002). Nearshore exhibition front-end study. Monterey, CA: Monterey Bay Aquarium.
  • Yalowitz, S. S. (2004). Jellies: Living art summative evaluation. Monterey, CA: Monterey Bay Aquarium.
  • Yalowitz, S. S., & Ferguson, A. (2006). Sharks: Myth and Mystery summative evaluation. Monterey, CA: Monterey Bay Aquarium.
