Research Engineer, University of Kentucky
This paper addresses the relationship between roadway segment length and the efficacy of Safety Performance Function (SPF) models. Like many states, Kentucky uses the Highway Safety Manual's network screening procedure to develop priority lists for its Highway Safety Improvement Program. This paper demonstrates that the choice of average roadway segment length can produce markedly different priority lists; in some cases the overlap between lists is less than 20 percent. The objective of this paper is therefore to report on an investigation of the effect of segment length on the development of SPFs and to identify the average lengths that produce the best-fitting SPF. Several goodness-of-fit metrics, along with Cumulative Residual (CURE) plots, are used to compare model performance across 16 different segment lengths on the same roadway network and crash data. Very short segments produce model bias, while longer segments result in higher absolute deviation; neither is desirable. For data from Kentucky parkways (i.e., roads designed like freeways), a segment length of approximately 2 miles (3.2 km) appears to achieve optimal performance across the key metrics.
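The CURE-plot comparison described above can be sketched in a few lines: residuals (observed minus SPF-predicted crashes) are accumulated over sites sorted by the covariate of interest, and the cumulative curve is judged against a two-sigma envelope in the style of Hauer and Bamfo. The data values in the example are invented.

```python
import math

def cure_plot(observed, predicted):
    """Cumulative residuals and two-sigma bounds for a CURE plot.
    Sites are assumed pre-sorted by the covariate of interest (e.g., AADT)."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    cum, total = [], 0.0
    for r in residuals:
        total += r
        cum.append(total)
    # Hauer/Bamfo-style variance: sigma_i^2 * (1 - sigma_i^2 / sigma_n^2),
    # where sigma_i^2 is the running sum of squared residuals
    sq = [r * r for r in residuals]
    s_total = sum(sq)
    bounds, running = [], 0.0
    for s in sq:
        running += s
        bounds.append(2.0 * math.sqrt(running * (1.0 - running / s_total)))
    return cum, bounds

# toy example: 4 segments, sorted by AADT
cum, bounds = cure_plot([3, 5, 2, 7], [2.5, 4.0, 3.0, 6.5])
```

A well-fitting SPF keeps the cumulative curve inside the envelope; persistent drift outside it signals the bias that very short segments produce.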
Abstract: Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct various safety evaluations and analyses. Developing a new SDF is a difficult task that demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has begun to document SDF models for different facility types; SDF models for freeways and ramps were recently introduced in an HSM addendum. However, because these models are fitted and validated using data from a small number of selected states, they must be calibrated to local conditions before being applied in a new jurisdiction. The HSM provides a methodology for calibrating the models through a scalar calibration factor, but this methodology has never been validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis of the bias between the 'true' and 'estimated' calibration factors. The results indicate that as the true calibration factor deviates further from 1, more bias is observed between the 'true' and 'estimated' factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs a larger sample to calibrate SDF models. Based on this observation, sample-size guidelines are proposed in terms of the average CV of the crash severities used in the calibration process.
Pub.: 05 Nov '16, Pinned: 30 Jun '17
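The calibration the abstract examines is, in the HSM convention, a scalar ratio of total observed to total model-predicted crashes of a given severity, and the sample-size guidance keys on the sample coefficient of variation. A minimal sketch with invented site counts:

```python
def calibration_factor(observed, predicted):
    """HSM-style scalar calibration factor:
    total observed / total predicted crashes of a given severity."""
    return sum(observed) / sum(predicted)

def coefficient_of_variation(counts):
    """Sample CV = s / mean, the statistic the proposed
    sample-size guidelines are keyed on."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)
    return var ** 0.5 / mean

# invented KAB counts at three calibration sites
factor = calibration_factor([2, 3, 5], [2.0, 2.0, 1.0])
cv = coefficient_of_variation([2, 4, 6])
```

The abstract's finding then reads naturally: the larger the CV of the severity counts, the more sites are needed before the estimated factor reliably approaches the true one.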
Abstract: Crash Modification Factors (CMFs) represent the effects of changes to highway design elements on crashes and are usually obtained from observational studies based on reported crashes. The design element of interest in this paper is horizontal curvature on rural two-lane highways. The data came from the Washington State database in the Highway Safety Information System (HSIS). Crash prediction models are developed for curve sections on rural two-lane highways and for the tangent sections upstream and downstream of the curves. Separate negative binomial models were developed for segments on level grades (<3%), moderate grades (3-6%), and steep grades (>6%) to account for the confounding effect of gradient. The relationships between crashes, traffic volumes, and deflection angles are explored to show how to obtain CMF estimates for increases in the minimum radius, considering both the effect of increased tangent length for sharper curves (an effect overlooked by the Highway Safety Manual CMF) and the effect of gradient. The results indicate that even at different design speeds and deflection angles, the CMF estimates for incremental increases in radius lie within the same range, and that the crash reduction rate (CRR) is higher for segments on steeper grades than for those on lower grades.
Pub.: 16 Jun '17, Pinned: 30 Jun '17
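The tangent-length effect the abstract highlights can be illustrated with a toy prediction model: for a fixed deflection angle, a sharper curve (smaller radius) has a shorter arc, so the tangent approaches within a fixed corridor are longer, and a CMF for a radius increase should be the ratio of predictions for the whole curve-plus-tangent site. Every coefficient below is hypothetical, not from the Washington HSIS models.

```python
import math

def site_mu(radius_ft, deflection_deg, aadt, corridor_ft=5000.0):
    """Hypothetical NB-type mean for a curve plus its tangents.
    Fixed deflection ties arc length to radius; a sharper curve
    shortens the arc and lengthens the tangents in the corridor."""
    arc_ft = math.radians(deflection_deg) * radius_ft
    tangent_ft = max(corridor_ft - arc_ft, 0.0)
    # invented per-foot rates: curve risk grows as the radius shrinks
    curve_rate = 2e-8 * math.exp(2000.0 / radius_ft)
    tangent_rate = 1e-8
    return aadt ** 0.8 * (curve_rate * arc_ft + tangent_rate * tangent_ft)

def cmf(radius_base_ft, radius_new_ft, deflection_deg, aadt):
    """CMF for a radius increase: ratio of whole-site predictions."""
    return (site_mu(radius_new_ft, deflection_deg, aadt)
            / site_mu(radius_base_ft, deflection_deg, aadt))
```

Because the ratio is taken over the whole site, the extra tangent exposure of the sharper baseline curve partially offsets its higher curve risk, which is exactly the effect the paper argues the HSM CMF omits.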
Abstract: In several papers, Hauer (1988, 1989, 2000a, 2000b, 2016) has argued that the level of safety built into roads is unpremeditated, i.e. not the result of decisions based on knowledge of the safety impacts of design standards. Hauer has pointed out that the development of knowledge about the level of safety built into roads has been slow and remains incomplete even today. Based on these observations, this paper asks whether evolutionary theory can contribute to explaining the slow development of knowledge. A key proposition of evolutionary theory is that knowledge is discovered through a process of learning-by-doing; it is not necessarily produced intentionally by means of research or development. An unintentional discovery of knowledge is treacherous as far as road safety is concerned, since an apparently effective safety treatment may simply be the result of regression-to-the-mean. The importance of regression-to-the-mean was not fully understood until about 1980, and a substantial part of what was regarded as known at that time may have been based on studies not controlling for regression-to-the-mean. An attempt to provide an axiomatic foundation for designing a safe road system was made by Gunnarsson and Lindström (1970). This had the ambition of providing universal guidelines that would facilitate a preventive approach, rather than the reactive approach based on accident history (i.e. designing a system known to be safe, rather than reacting to events in a system of unknown safety). Three facts are notable about these principles. First, they are stated in very general terms and do not address many of the details of road design or traffic control. Second, they are not based on experience showing their effectiveness. Third, they are partial and do not address the interaction between elements of the road traffic system, in particular road user adaptation to system design. 
Another notable fact consistent with evolutionary theory is that the safety margins built into various design elements have been continuously eroded by the development of bigger and faster motor vehicles, which can only be operated safely if roads are wider and straighter than they needed to be when motor vehicles were smaller and moved more slowly.
Pub.: 18 Jun '17, Pinned: 28 Jun '17
Abstract: The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems or programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues, accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed-effects negative binomial regression was used to develop two SPFs from 5 years of crash data and over 8754 km of two-lane rural roadway: the first SPF contained typical roadway elements, while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise were identified. The disparity between SWiPs identified by the two SPFs was evident: of the top 220 segments, 40 unique sites were identified by each model. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment.
Pub.: 24 Jun '17, Pinned: 28 Jun '17
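The empirical Bayes adjustment applied before ranking can be sketched with the standard HSM weighting, in which each site's expected frequency is a blend of the SPF prediction and the observed count, weighted by the negative binomial overdispersion parameter. Site values below are invented.

```python
def empirical_bayes(observed, mu, k):
    """EB expected crash frequency: blend of SPF prediction mu and the
    observed count; the SPF weight shrinks as overdispersion k or mu grows."""
    w = 1.0 / (1.0 + k * mu)
    return w * mu + (1.0 - w) * observed

def rank_swips(sites, top_n):
    """Rank sites by EB-expected crashes, descending.
    Each site is a (site_id, observed, mu, k) tuple."""
    scored = [(sid, empirical_bayes(obs, mu, k)) for sid, obs, mu, k in sites]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_n]

# invented two-site network; the study's comparison would run this
# ranking once per SPF and compare the resulting top lists
sites = [("a", 10, 4.0, 0.25), ("b", 2, 2.0, 0.25)]
top = rank_swips(sites, 1)
```

Running the same ranking under both SPFs and diffing the top segments reproduces, in miniature, the paper's comparison of SWiP lists.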
Abstract: Generalized linear models (GLMs) with a negative binomial error distribution have been widely used to estimate safety at the transportation planning level. The limited ability of this technique to account for spatial effects can be overcome with local models from spatial regression techniques such as Geographically Weighted Poisson Regression (GWPR). Although GWPR handles spatial dependency and heterogeneity and has already been used in some road safety studies at the planning level, it fails to account for the overdispersion commonly found in road-traffic crash counts. Two approaches were adopted for the Geographically Weighted Negative Binomial Regression (GWNBR) model to allow discrete data to be modeled in a non-stationary form while accounting for overdispersion: the first assumes a constant overdispersion parameter for all traffic zones, and the second estimates a separate parameter for each spatial unit. This research conducts a comparative analysis between non-spatial global crash prediction models and local spatial GWPR and GWNBR models at the traffic-zone level in Fortaleza, Brazil. A geographic database of 126 traffic zones was compiled from the available data on exposure, network characteristics, socioeconomic factors, and land use. The models were calibrated using the frequency of injury crashes as the dependent variable. The results showed that GWPR and GWNBR outperformed the GLM in terms of average residuals and likelihood and reduced the spatial autocorrelation of the residuals, and that the GWNBR model better captured the spatial heterogeneity of crash frequency.
Pub.: 26 Jun '17, Pinned: 28 Jun '17
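The "geographically weighted" part of GWPR/GWNBR amounts to fitting a separate regression at each traffic zone, with neighbouring zones down-weighted by distance through a kernel. A common choice is a Gaussian kernel, sketched here together with the spatially weighted Poisson log-likelihood the local coefficients maximise; the bandwidth and data values are arbitrary.

```python
import math

def gaussian_weights(distances_km, bandwidth_km):
    """Kernel weights for the local regression at one zone: weight 1 at
    the zone itself, decaying smoothly with distance to its neighbours."""
    return [math.exp(-0.5 * (d / bandwidth_km) ** 2) for d in distances_km]

def weighted_poisson_loglik(y, mu, w):
    """Each zone's Poisson log-likelihood term scaled by its spatial
    weight; maximising this over local coefficients gives the GWPR fit
    (GWNBR swaps in a negative binomial likelihood)."""
    return sum(wi * (yi * math.log(mi) - mi - math.lgamma(yi + 1))
               for yi, mi, wi in zip(y, mu, w))

# weights for a zone and two neighbours 1 km and 3 km away
w = gaussian_weights([0.0, 1.0, 3.0], 1.0)
```

The GWNBR extension the abstract describes adds an overdispersion parameter to this local likelihood, either shared across zones or estimated zone by zone.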