Abstract
Most metropolitan areas rely on large detector networks to collect real-time traffic data in support of traveler information systems. Such a network usually involves hundreds or thousands of detectors at a uniform, fixed spacing and is therefore expensive to operate and maintain. Existing studies have given limited attention to the impact of detector spacing on overall system performance and to its economic significance. This article examines the effectiveness of freeway travel time estimation by the widely used midpoint method under different detector spacings. A sixteen-mile section of I-75 in Cincinnati, Ohio, serves as a test bed, where alternate subsets of detectors are selected to define separate spacing schemes for evaluation. A wide range of traffic conditions is considered, including work zones and incidents. Preliminary results show that although estimation errors generally increase with detector spacing, the commonly used 1/3-mile spacing of speed detectors is not necessarily superior to larger spacings under congestion; furthermore, field data show that with a one-mile spacing in this test bed, the estimated travel times match those from floating cars most closely, with satisfactory accuracy.
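The midpoint method referenced above can be sketched as follows: each detector's spot speed is assumed to prevail over an influence zone extending to the midpoints with its neighboring detectors, and segment travel times are summed over the corridor. This is a minimal illustrative sketch; the function name, units (miles and miles per hour), and example values are assumptions for illustration, not data from the study.

```python
def midpoint_travel_time(positions, speeds):
    """Estimate corridor travel time with the midpoint method.

    positions: detector locations along the corridor (miles), sorted ascending.
    speeds: spot speeds measured at those detectors (mph).
    Returns estimated travel time in hours.
    """
    n = len(positions)
    total = 0.0
    for i in range(n):
        # Influence zone of detector i: from the midpoint with the upstream
        # neighbor to the midpoint with the downstream neighbor; the first and
        # last detectors cover out to the corridor ends.
        start = positions[0] if i == 0 else (positions[i - 1] + positions[i]) / 2
        end = positions[-1] if i == n - 1 else (positions[i] + positions[i + 1]) / 2
        total += (end - start) / speeds[i]  # time = distance / speed
    return total


# Hypothetical two-mile corridor with detectors at 1-mile spacing and a
# slow middle detector: 0.5 mi at 60 mph + 1 mi at 30 mph + 0.5 mi at 60 mph
# = 0.05 h (3 minutes).
print(midpoint_travel_time([0.0, 1.0, 2.0], [60.0, 30.0, 60.0]))
```

Evaluating alternative spacing schemes, as the study does, amounts to passing subsets of the detector positions and speeds to such a function and comparing the resulting estimates against floating-car travel times.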
Funding for this research was provided by the Ohio Department of Transportation. Appreciation is also extended to Andrew Fluegemann at ARTIMIS and to TranSystems Corporation for providing the base data.