
Extraction of road boundary from MLS data using laser scanner ground trajectory

Lichun Sui, Jianfeng Zhu, Mianqing Zhong, Xue Wang and Junmei Kang
From the journal Open Geosciences

Abstract

Various means of extracting road boundaries from mobile laser scanning data based on vehicle trajectories have been investigated. Independent of positioning and navigation data, this study estimated the scanner ground track from the spatial distribution of the point cloud as an indicator of road location. We defined a typical edge block consisting of multiple continuous upward fluctuating points, characterized by abrupt changes in elevation, upward slope, and road horizontal slope. Such edge blocks were then searched for on both sides of the estimated track. A pseudo-mileage spacing map was constructed to reflect the variation in spacing between the track and edge blocks over distance, within which road boundary points were detected using a simple linear tracking model. Experimental results demonstrate that the extracted scanner ground trajectory forms a smooth, continuous string lying on the road; it can therefore serve as the basis for defining edge blocks and for the road boundary tracking algorithm. The defined edge block proved highly accurate and strongly noise resistant in experiments, while the boundary tracking algorithm is simple, fast, and independent of any road boundary model. The correct detection rate of the road boundary in both experimental datasets exceeds 99.2%.

1 Introduction and related works

Mobile laser scanning (MLS) is an advancing technology that acquires high-density object data efficiently within detectable distances from moving platforms [1,2,3,4,5]. MLS has been widely applied in numerous fields including urban planning and management [6,7], intelligent transportation [8,9,10,11], and highways [12,13]. Extracting road boundaries from MLS data is challenging because of complex scene environments [4,9], huge data volumes [2,14], occlusion [4,13], and the heterogeneous, sparse, and noisy nature of point clouds [14].

To establish the topological relationship of point clouds, studies often project three-dimensional (3D) point clouds onto two-dimensional (2D) planes to form grid images. Thus, a series of image processing techniques such as feature extraction [1,15,16], image segmentation [8,17,18], and mathematical morphology [12,19] can be performed based on neighborhood pixels. For example, the elevation difference within a grid or between adjacent pixels is often used as a boundary indicator [16,18,20,21,22,23,24]. It is more intuitive to distinguish raised edges from road cross-sections [3,23,25], while vehicle trajectories provide the direction of projection for edge generation. Xia et al. identified holes, detected break-lines from the elevation image generated from the point cloud, and matched patches along the road to fill the holes [26]. Constructing a 3D raster map with a unit volume called a voxel (volume pixel) [5,8,27] is another useful tool to simplify high-density point clouds. For example, a greater density gradient in more than one direction implies the existence of a road edge in ref. [5]. Considering the uneven density of raw data, Lin et al. generated super-voxels with adaptive resolution to preserve edges with smaller segmentation error [27]. Sha et al. further improved Lin's method using geometric information to cluster the points when merging the point label into its neighbor representative points [28]. Xu et al. classified point clouds using trend features of super-voxels [29]. Despite its simplicity and convenience, converting point clouds into voxels may cause information loss and, consequently, reduce accuracy [5].

On the contrary, extracting boundaries directly from raw point clouds has gained increasing attention. In this regard, k-dimensional (KD) trees [2,14,30] have been widely used to accelerate nearest neighbor searches during data organization. Numerous methods based on scan lines, which use the original record order, have been developed for road boundary extraction. A scan line was divided into several segments in ref. [2,20,31]. In addition, road segments were detected by line-based region growing and then fitted to straight lines [8] or polynomial curves [15,17,32]. Boundary points have also been detected using moving window operators on a scan line or adjacent lines, based on abrupt changes in elevation, point density, surface roughness [13], and the angles formed by three consecutive points [21,33]. In most cases, the window size was fixed manually, and multiple thresholds were defined from experience and adjusted frequently to accommodate distinct point cloud distributions.

In these methods, candidate boundary points are usually extracted from segments or scan lines. The final boundary is determined via clustering, fitting, or boundary tracking algorithms. Continuous closed polygons were extracted in ref. [10] as road boundaries. Active contour models were applied in ref. [6] to fix the optimal road boundary on projected 2D raster images of elevation, reflectance, and pulse width. Discrete Kalman filter [11,16] and α-shape [3,4,8] algorithms are often adopted to identify and track boundary points. To reduce errors in tracking, multiple constraints are imposed to achieve the desired edge clusters, e.g., the distance between adjacent points, the direction of the historical boundary formed by previously identified points [16], length constraints [1], and collinearity conditions of adjacent segments [15].

Although plenty of algorithms have been proposed for road boundary extraction from MLS data, there is still a need for a framework that extracts the road boundary efficiently. In the majority of methods, vehicle trajectories have been used as a reference to locate the road boundary [3,6,20,34,35]. Thanks to road design parameters, unnecessary data located far from the road can be easily filtered out [9,17,18,21,36]. In addition, huge point clouds can be partitioned into several segments [8,37,38] or cross-sectional profiles [3,17,23,39] along the trajectory direction. However, trajectory data may not be packaged in the original ".las" files. Therefore, instead of using external data, this study estimates the scanner ground trajectory directly from raw point clouds, which is one of the key contributions of the paper.

Moreover, the routine height of curb stones is often used to extract edges that are highly uniform in height [20,24]. Candidate edge points are often detected as gathered clusters composed of many single points with large elevation gradients. As another contribution of this study, multiple points of continuous upward fluctuation are combined into a single edge block without consideration of edge height.

In addition, boundary points are distributed near straight lines over short distances [11]. Most road boundary vectorization is achieved by fitting straight lines to candidate edge points [18,20], which suits straight or large-radius, mildly curved edges. However, fitting curved boundaries requires prior knowledge to define an appropriate model. Based on the approximately equidistant relationship between the road edge and the scanner trajectory, this study transforms the process of edge point tracking onto a pseudo-mileage spacing map, to which a linear model can be applied.

The remainder of this paper is organized as follows. Section 2 details the proposed method. Section 3 presents and discusses the experimental results. Section 4 concludes the paper.

2 Methodology

In this study, we propose a three-part method to quickly extract road boundaries from MLS data based on the scanner ground track. First, the scanner track is estimated from raw data using point density and slope. Then, edge blocks composed of several continuously upward fluctuating points are searched for on both outer sides of the estimated track. Finally, the spacing between the scanner trajectory and the extracted edge blocks is depicted in a pseudo-mileage spacing map to describe trends in spacing over distance traveled. A simple linear model, deduced from the proximity and collinearity of boundary feature points in the map, is used to quickly detect and track boundary points. Figure 1 presents a flow chart of the proposed method.

Figure 1 Flowchart of the proposed road boundary extraction method.

2.1 Estimation of laser scanner ground tracks

This study estimated scanner ground tracks based on point density and slope in three steps.

2.1.1 Extraction of road feature areas

As scan lines are perpendicular to vehicle trajectory, the scanner ground trajectory is located at the densest part of the point cloud with a slight slope [40]. Point density (JS) and slope (Jn) are calculated using the following equations:

(1) $JS_i = \dfrac{1}{2k}\sqrt{(X_{i+k}-X_{i-k})^2 + (Y_{i+k}-Y_{i-k})^2 + (Z_{i+k}-Z_{i-k})^2}$,

(2) $Jn_i = \dfrac{Z_{i+2}-Z_{i-2}}{\sqrt{(X_{i+2}-X_{i-2})^2 + (Y_{i+2}-Y_{i-2})^2}}$,
where $2k$ is the number of points counted in the neighborhood (at least 20, to weaken the effect of spacing variation caused by missing points), and $Jn_i$ is formed between the second point ahead of and the second point behind the $i$-th point to reduce the impact of noise and small fluctuations. The points satisfying $TY = \{P \mid JS_i < \mathrm{median}(JS)\ \text{and}\ Jn_i < \tan 10^{\circ}\}$ are then taken as the road feature area.
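To make this feature test concrete, the following sketch (our illustration under stated assumptions, not the authors' released code) computes $JS$ and $Jn$ for a cloud stored in scan-line recording order; the array names and the index clamping at the data ends are our choices.

```python
# Illustrative sketch of equations (1)-(2); `pts` is an (N, 3) float array
# in the original recording order. Function and variable names are ours.
import numpy as np

def road_feature_mask(pts, k=10, max_slope_deg=10.0):
    n = len(pts)
    idx = np.arange(n)
    lo = np.clip(idx - k, 0, n - 1)          # index i-k, clamped at ends
    hi = np.clip(idx + k, 0, n - 1)          # index i+k
    # Eq. (1): mean point spacing over the 2k-point neighborhood.
    js = np.linalg.norm(pts[hi] - pts[lo], axis=1) / (hi - lo).clip(min=1)
    # Eq. (2): slope between the 2nd point behind and 2nd point ahead.
    lo2 = np.clip(idx - 2, 0, n - 1)
    hi2 = np.clip(idx + 2, 0, n - 1)
    dz = pts[hi2, 2] - pts[lo2, 2]
    dxy = np.linalg.norm(pts[hi2, :2] - pts[lo2, :2], axis=1)
    jn = np.abs(dz) / np.maximum(dxy, 1e-9)  # guard against zero spacing
    # TY: spacing below the median and slope below 10 degrees.
    return (js < np.median(js)) & (jn < np.tan(np.radians(max_slope_deg)))
```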

2.1.2 Obtaining rough scanner tracks

Points high above the road but with low slope may be included in TY because of environmental complexity. To remove them, we segmented TY by a time interval Δt based on GPS time T. Let m be the number of segments after TY is segmented. Elevation is analyzed for each segment $TY_i = \{TY \mid (i-1)\cdot\Delta t < T < i\cdot\Delta t\}$. The points with the most concentrated elevation distribution in $TY_i$ are retained and marked as the i-th road feature area $TYF_i = \{TY_i \mid Z_P - \Delta Z < Z < Z_P + \Delta Z\}$, where $Z_P$ represents the peak of the elevation histogram of $TY_i$ and ΔZ (0.2 m is recommended) is the allowable elevation tolerance. Rough scanner tracks CG (red points in Figure 2) are obtained by averaging the coordinates of all points in each TYF, giving $CG_i = \overline{TYF_i(X, Y, Z)}$.
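A minimal sketch of this step follows, under our reading of the text; the histogram bin width and the array layout are assumptions, not the authors' exact implementation.

```python
# Sketch of Section 2.1.2: slice the road feature area TY by GPS time,
# keep the dominant elevation band per slice, and average the survivors.
import numpy as np

def rough_track(ty_pts, ty_t, dt=0.05, dz=0.2, bin_width=0.05):
    """ty_pts: (N, 3) points of TY; ty_t: (N,) GPS times. Returns CG points."""
    cg, t0 = [], ty_t.min()
    n_seg = int(np.ceil((ty_t.max() - t0) / dt))
    for i in range(n_seg):
        in_seg = (ty_t >= t0 + i * dt) & (ty_t < t0 + (i + 1) * dt)
        seg = ty_pts[in_seg]
        if len(seg) == 0:
            continue
        # Peak of the elevation histogram marks the road surface (Z_P).
        nbins = max(1, int(np.ptp(seg[:, 2]) / bin_width) + 1)
        hist, edges = np.histogram(seg[:, 2], bins=nbins)
        zp = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
        tyf = seg[np.abs(seg[:, 2] - zp) < dz]   # TYF_i: +-0.2 m band
        cg.append(tyf.mean(axis=0))              # CG_i: center of gravity
    return np.array(cg)
```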

Figure 2 The rough estimated scanner ground tracks.

In Figure 2, the rough estimated trajectory (the centers of gravity of the horizontally distributed high point-density areas) swings left and right, which is inconsistent with driving behavior. Therefore, a refining process is necessary.

2.1.3 Refinement of the estimated scanner ground trajectory

In Figure 2, colored strips indicate the road feature areas TYF divided by time interval Δt, in which the CG points (in red) swing as the point distribution varies while the vehicle travels forward. To further refine $CG_i$, we fitted them to an isochronous series based on T. The CG points are first mapped back to the original point data based on the nearest neighbor principle (Euclidean distance). Then, the scan line number corresponding to each CG point is determined. A test region of scan line frequency is constructed between $CG_i$ and $CG_{i-1}$, shown as the black rectangle in Figure 3.

Figure 3 Test region used to obtain the scan line to which CGi belongs.

The test region is several times wider than the average point spacing and longer than the CG point spans. The value of T in the test region changes only slightly within points 1–4, 5–8, and 9–11, while sharp time differences occur between points 4 and 5 and between points 8 and 9. The sharp difference indicates the approximate interval between adjacent scan lines; let it be $t_L$. The scan line number to which $CG_i$ belongs can then be derived by:

(3) $SNl_i = \left\lfloor \dfrac{T_{CG_i} - T_1}{t_L} + \dfrac{1}{2} \right\rfloor$,
where $SNl_i$ denotes the sequence number of the scan line to which $CG_i$ belongs, taking the scan line of the first point of dataset P as line 1.

All $CG_i$ points are derived from multiple measurements of laser foot points corresponding to the same scanning angle, so their times T should satisfy the isochronous sequence shown in equation (4).

(4) $T_{CG_i} = t_1 + (SNl_i - 1)\, t_d$,
where $t_1$ and $t_d$ represent the reliable estimate of the acquisition time of the refined ground track point on the first scan line and the elapsed time of each scan line, respectively. The point whose acquisition time is closest to the refined $T_{CG_i}$ is taken as the estimated vehicle track point, expressed as $NPH_i$.
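The refinement can be sketched as follows (our reconstruction of equations (3)-(4); the inputs — the CG acquisition times, the time of the first point, and the rough per-line period measured from the test region of Figure 3 — are assumed to be available).

```python
# Sketch of the isochronous refinement in equations (3)-(4).
import numpy as np

def refine_track_times(t_cg, t_first, t_line):
    # Eq. (3): nearest scan-line number of each CG point.
    snl = np.floor((t_cg - t_first) / t_line + 0.5).astype(int)
    # Eq. (4): fit T = t1 + (SNl - 1) * td to all CG points at once,
    # yielding reliable estimates of t1 and td by least squares.
    A = np.stack([np.ones_like(snl, dtype=float), snl - 1.0], axis=1)
    (t1, td), *_ = np.linalg.lstsq(A, t_cg, rcond=None)
    return t1 + (snl - 1) * td        # refined times T_CG_i

def snap_to_points(t_refined, pt_times):
    """Pick, for each refined time, the raw point with the closest GPS
    time; the returned indices are the track points NPH_i."""
    order = np.argsort(pt_times)
    pos = np.clip(np.searchsorted(pt_times[order], t_refined), 1, len(order) - 1)
    left, right = order[pos - 1], order[pos]
    use_left = np.abs(pt_times[left] - t_refined) <= np.abs(pt_times[right] - t_refined)
    return np.where(use_left, left, right)
```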

2.2 Composition of edge blocks

The raised curb is often detected as numerous discrete candidate boundary points. This study combines continuous fluctuating points into edge blocks by searching forward and backward from $NPH_i$. Figure 4 illustrates the process of building edge blocks in four consecutive steps.

Figure 4 The process of composition of edge blocks.

As can be seen from this figure, in the first step, a moving window is created to detect upward fluctuating points (small cyan squares in Figure 4) by height differences. Window size is expressed as the number of points contained within the window. The half width ($BMW_i$) of the window is determined by $BMW_i = C_h / (Js_i \sin\theta)$, where $C_h$ and $\theta$ denote the minimum height (0.08 m) and minimum slope (30°) of the boundary to be detected, respectively. The default values of these parameters follow regulations on the exposed height of curb stones in Chinese city streets. In addition, window width varies with $Js_i$: the larger the point spacing, the fewer points the window contains. The elevation difference $H_{\mathrm{diff}}^{i}$ before and after the center point of the window is calculated as:

(5) $H_{\mathrm{diff}}^{i} = \displaystyle\sum_{j=i+1}^{i+BMW_i} Z_j - \sum_{j=i-BMW_i}^{i-1} Z_j$.

A point at which $H_{\mathrm{diff}}^{i} \ge C_h$ is an upward point. We only detect boundaries in the forward direction, that is, $d_{i+BMW_i} - d_{i-BMW_i} > \varepsilon$, where $d_{i+BMW_i}$ and $d_{i-BMW_i}$ denote the distances of the last and first points in the window from $NPH_i$. The tolerance ε is set to −0.1 m (allowing a return of up to 0.1 m). The search length in Figure 4 is set to 15 m according to conventional road design width. Two point sets A1 and A2 are formed by forward and backward searches from $NPH_i$ in opposite directions. In the following steps, the forward search from A1 is taken as an example.
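The first step can be sketched as below; this is a hedged reading of equation (5), in which the distance array d (to $NPH_i$) and the handling of window edges are our assumptions.

```python
# Sketch of the upward-point test: half-width BMW_i = Ch/(Js_i sin(theta)),
# the elevation-difference test of Eq. (5), and the forward-direction check.
import numpy as np

def is_upward(pts, i, js_i, d, ch=0.08, theta_deg=30.0, eps=-0.1):
    bmw = max(1, int(round(ch / (js_i * np.sin(np.radians(theta_deg))))))
    if i - bmw < 0 or i + bmw >= len(pts):
        return False                      # window falls off the segment
    # Eq. (5): summed elevation ahead minus summed elevation behind.
    h_diff = pts[i + 1:i + bmw + 1, 2].sum() - pts[i - bmw:i, 2].sum()
    if h_diff < ch:
        return False
    # Forward check: distance from NPH_i must not shrink by more than 0.1 m.
    return d[i + bmw] - d[i - bmw] > eps
```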

The second step is to filter points by slope. We only consider the slope between the central point i of the window and the next point i + 1, which is given by:

(6) $\mathrm{Slope}_i = \dfrac{Z_{i+1} - Z_i}{\sqrt{(X_{i+1}-X_i)^2 + (Y_{i+1}-Y_i)^2}}$.

Points with $\mathrm{Slope}_i < \tan\theta$ are removed from the data. One or two continuous upward points may occur away from the road boundary; however, if multiple continuous points fluctuate upwards, they are more likely to belong to a boundary. We therefore combine continuous upward fluctuation points into blocks in the third step. The consecutive number $Bn_i$ must satisfy $Bn_i \ge \max(\eta\, C_h / Js_i,\ 1)$, where η is a correction coefficient determined by the ratio of the interquartile range to the total range of adjacent point spacing among 10 neighborhood points. The value of η depends on the rate of missing points; a value in the range of 0.7–1 is recommended.
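A sketch of this grouping step follows, assuming a boolean flag per point produced by the first two steps; the run-length test mirrors $Bn_i \ge \max(\eta C_h / Js_i, 1)$.

```python
# Sketch of step three: merge runs of consecutive upward points into
# candidate edge blocks. `upward` is a boolean array over the search
# segment; `js` holds the per-point spacing Js_i.
def group_blocks(upward, js, ch=0.08, eta=0.85):
    blocks, start = [], None
    for i, flag in enumerate(upward):
        if flag and start is None:
            start = i                               # run begins
        elif not flag and start is not None:
            if (i - start) >= max(eta * ch / js[start], 1.0):
                blocks.append((start, i - 1))       # long enough: a block
            start = None
    if start is not None and (len(upward) - start) >= max(eta * ch / js[start], 1.0):
        blocks.append((start, len(upward) - 1))     # run reaching the end
    return blocks                                   # (first, last) pairs
```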

In the last step, the elevation of $NPH_i$, extracted in Section 2.1.3, is used to remove blocks whose starting points are higher than the road surface. The filter is given in equation (7) and shown in Figure 4 with cyan shadows. Let $(X_i, Y_i, Z_i)$ represent the position of $NPH_i$ and $(BSX_j, BSY_j, BSZ_j)$ denote the coordinates of the start point of the j-th block located on the $NPH_i$ scan line. The threshold $Z_{th}$ is taken as the greater of 0.1 m and 3% of the span, according to the horizontal slope of the road (generally 1.5–2%).

(7) $BSZ_j - Z_i < Z_{th}$, where $Z_{th} = \max\left\{0.1,\ 3\%\sqrt{(BSX_j - X_i)^2 + (BSY_j - Y_i)^2}\right\}$.

As shown in Figure 4, the blue and green points are the first and remaining points of blocks, respectively. After these multiple constraints, few edge blocks remain on an $NPH_i$ scan line. If no edge blocks remain, the boundary point corresponding to $NPH_i$ is labeled as null. Otherwise, feature information of the discovered blocks, including $BSD_i^j$, the spacing from the starting point of the j-th block to $NPH_i$, and $BH_i^j$, the elevation difference within the block, is extracted for the boundary tracking process. These features are calculated as:

(8) $BSD_i^j = \sqrt{(BSX_j - X_i)^2 + (BSY_j - Y_i)^2 + (BSZ_j - Z_i)^2}$, $\quad BH_i^j = BEZ_j - BSZ_j$,

where $BEZ_j$ represents the elevation of the last point in the block.

2.3 Boundary tracking

Frequent lane switching is not allowed during scanning, which limits fluctuations in the spacing between the road boundary and the scanner's ground track. This relationship is reflected in the pseudo-mileage spacing map in Figure 5, in which $NPH_i$ points are represented as red circles. Blue and white rectangles denote the starting point of each block and the generated candidate edge feature points, respectively. The gray lines show the scan lines located at $NPH_i$. More than one edge block may be extracted at an $NPH_i$; however, the first block in each scan line is highly likely to belong to the roadside when the road surface is relatively smooth. Points on the map are named candidate boundary feature points, each corresponding to $\left(\sum_{1}^{i} L_i,\ BSD_i^j\right)$, where $L_i$ is the Euclidean distance between adjacent $NPH_i$. The boundary tracking algorithm consists of three processes.

Figure 5 The pseudo-mileage spacing map.

Process 1: Extracting the initial feature segment of the road boundary

In most cases, $y_i$ varies little within a certain range of Δx. There is always a segment of the map longer than 5 m that accords with a robust linear regression model at 95% confidence. These segments are found in the dataset composed of the first block of each scan line and are labeled the initial road boundary feature segments (red line in Figure 6). Interior points satisfying the regression conditions are identified as boundary feature points, and the end points on both sides are identified as the last identified points (red points in Figure 6).
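As an illustration only, Process 1 could be sketched as a sliding window with an ordinary least-squares line and an inlier check; the 0.1 m residual tolerance substitutes for the paper's robust regression at 95% confidence and is our assumption.

```python
# Sketch of Process 1: find the first segment longer than 5 m whose
# candidate points (x = cumulative mileage, y = spacing BSD) fit a line.
import numpy as np

def initial_segment(x, y, min_len=5.0, tol=0.1, min_frac=0.95):
    for i in range(len(x)):
        j = np.searchsorted(x, x[i] + min_len)      # x must be ascending
        if j >= len(x):
            break
        xs, ys = x[i:j + 1], y[i:j + 1]
        k, b = np.polyfit(xs, ys, 1)                # least-squares line
        if (np.abs(ys - (k * xs + b)) < tol).mean() >= min_frac:
            return i, j, k, b       # segment bounds and line parameters
    return None
```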

Figure 6 Hunting zones derived by Δx and α.

In Figure 6, the initial feature segment of the road boundary is approximately a straight line. We use this line as the direction along which to establish a possible area containing nearby boundary feature points beyond the last identified feature points. The vertex angle of the possible area changes with the distance (Δx) from the vertex (the last identified point); its size is determined in Process 2.

Process 2: Predicting the possible location of the next boundary feature point

The vehicle keeps moving in its original direction because of inertia. We first evaluate the separation of the vehicle from the road boundary based on the tail of the recognized edge feature points (not less than 3 m), where $\Delta y_r = y_{\max} - y_{\min}$. If $\Delta y_r \le y_{th}$, where $y_{th}$ is a set threshold, the vehicle runs parallel to the road boundary, and the possible location of the next boundary feature point is close to the last recognized $y_i$. Otherwise, the separation of the vehicle from the road boundary is represented by a regressed linear model. Then, the $y_{i+1}$ of the next boundary feature point at $x_{i+1}$ can be predicted as:

(9) $y_{i+1} = y_i$ if $\Delta y_r \le y_{th}$; $\quad y_{i+1} = k_l x_{i+1} + k_b$ if $\Delta y_r > y_{th}$,
where $y = k_l x + k_b$ represents the regressed straight line derived from the tail of the recognized boundary feature points. Let ψ be the angle of the regressed line from the positive x-axis; then $k_l = \tan\psi$. The threshold $y_{th}$ is set at 0.25 m, giving a maximum deviation of 5° from the road edge. It is common for vehicles to travel parallel to the road boundary, in which case the prediction $y_{i+1} = y_i$ improves tracking speed.

Process 3: Tracking boundary feature points using a hunting zone

We tracked boundary feature points using a hunting zone (black in Figure 6) controlled by a fan-shaped center angle α. As shown in Figure 6, α opens along the predicted direction, with its vertex at the last recognized boundary feature point (the last green point). We recommend an empirical model $\alpha = a\,\exp(-b\,\Delta x) + c$, where a, b, and c are constant coefficients controlling the fan-shaped angle in radians. The recommended values (0.86, 0.76, 0.12) cause α to decrease as Δx increases. In practice, the search area is converted into a hunting zone parallel to the y-axis, deduced as follows:

(10) $R = 0.2\,\mathrm{m}$ for $\Delta x \le 0.25\,\mathrm{m}$; $\quad R = 2\tan\left(\dfrac{\alpha}{2}\right)\dfrac{\Delta x}{\cos\psi}$ for $0.25\,\mathrm{m} < \Delta x < 15\,\mathrm{m}$,
where $[y_{i+1} - R,\ y_{i+1} + R]$ constitutes the hunting zone. If the hunting zone does not include a point at $x = x_{i+1}$, the search continues at $x = x_{i+2}$. If multiple points are found in the hunting zone, the point with the smallest angle to the search direction is selected. However, as the distance between the current point and the last identified boundary feature point (the last green point) increases, the reliability of tracking deteriorates. We set a maximum search length $SL_{th}$ of 15 m. Beyond this length, the forward search process is interrupted, and Processes 1–3 are re-implemented from the last identified boundary feature point. Figure 7 illustrates the segmentation of feature points in the tracking process.
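Processes 2 and 3 can be condensed into the following sketch of equations (9) and (10); the function decomposition and data layout are ours, while the constants follow the text.

```python
# Sketch of prediction (Eq. (9)) and the hunting zone radius (Eq. (10)).
import numpy as np

def predict_next(tail_x, tail_y, x_next, y_th=0.25):
    """Predict y_{i+1} from the tail (>= 3 m) of recognized feature points."""
    if tail_y.max() - tail_y.min() <= y_th:
        return tail_y[-1], 0.0             # parallel travel: y_{i+1} = y_i
    k, b = np.polyfit(tail_x, tail_y, 1)   # regressed line y = k*x + b
    return k * x_next + b, np.arctan(k)    # prediction and angle psi

def hunting_radius(dx, psi=0.0, a=0.86, b=0.76, c=0.12):
    if dx <= 0.25:
        return 0.2                          # near range: fixed 0.2 m
    alpha = a * np.exp(-b * dx) + c         # fan angle shrinks with dx
    return 2.0 * np.tan(alpha / 2.0) * dx / np.cos(psi)

def in_zone(y_cand, y_pred, dx, psi):
    """Accept a candidate if it lies inside [y_pred - R, y_pred + R]."""
    return abs(y_cand - y_pred) <= hunting_radius(dx, psi)
```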

Figure 7 Data segmentation in tracking process.

The red line on the left side of Figure 7 is the initial edge feature segment recognized for the first time. It divides dataset B into two segments, B2 and B1, both of which contain red line segments. Starting from the end points (in red), the two sides are tracked in opposite directions. The identified edge feature points tracked in opposite directions are marked in green. If no point is marked within $SL_{th}$ of the last identified point, the data are disconnected. Beginning at the last identified green point, a new search dataset B3 is constructed, on which Processes 1–3 are executed again. The initial boundary feature segment (red line on the right side of Figure 7) is found a second time. B3 is divided into segments B4 and B5 for forward and backward searching, respectively. Note that the overlapping search area B6 undergoes a bidirectional search: a forward search in the first dataset B1 and a reverse search in B5. This reduces the probability of missing boundary points. If the recording order of data points in B2 and B5 is reversed, the same search steps as in B1 and B3 can be used.

2.4 Post-processing

Once the connection relations of the extracted boundary points on the scan line are determined, a rough vectorized boundary can be quickly obtained. The selection of boundary mathematical models and the extraction of all point clouds in the road are not addressed in this paper. The following sections focus on determining the connection between adjacent boundary points.

Length density Ld (m⁻¹) of two boundary points is defined as the ratio of the number of points in their spanning region to their distance. Figure 8 illustrates the span cube based on two adjacent points. In this figure, the red points are extracted edge points, the green lines connect two adjacent boundary points, cb1 and cb2 (set at 0.2 and 0.1 m) depict the searching width in the directions toward and away from the road, and ch1 and ch2 in Figure 8(c) indicate the height range of the cube, where ch1 extends above the tallest point and ch2 below the lowest point (both set at 0.1 m). The connector Con describes the connection between adjacent boundary points as:

(11) $Con = 1$ if $s_i^{i+1} \le 2\,\mathrm{m}$; $\quad Con = 1$ if $Ld^{-1} < 5A_s$ and $2\,\mathrm{m} < s_i^{i+1} \le 20\,\mathrm{m}$ and $\varphi_i < 10^{\circ}$; $\quad Con = 0$ otherwise,
where $A_s$ and $s_i^{i+1}$ refer to the average point spacing of the raw data and the Euclidean distance between the current boundary point i and the next point i + 1, respectively. $\varphi_i$ is the deviation angle from the line direction obtained by least squares linear regression over the five points before and after point i. When Con = 1, point i is connected to the next point; otherwise, the points are not connected.
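Under our reading of equation (11), the connector can be sketched as below; `n_span` (the count of raw points inside the span cube) and the argument names are assumptions.

```python
# Sketch of the connector test of equation (11).
def connected(dist, n_span, a_s, phi_deg):
    """dist: 3D spacing s between adjacent boundary points (m); n_span:
    raw points inside their span cube; a_s: average point spacing (m);
    phi_deg: deviation angle from the locally regressed direction."""
    if dist <= 2.0:
        return True                      # close points always connect
    ld = n_span / dist                   # length density Ld, points per m
    if ld > 0 and 1.0 / ld < 5.0 * a_s and dist <= 20.0 and phi_deg < 10.0:
        return True                      # span well populated and straight
    return False
```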

Figure 8 Span cube based on two adjacent edge points. (a)–(c) represent the search cube in axonometric, top, and side view, respectively; (d) illustrates the length of the span cube.

3 Results

3.1 Data

Experimental data were collected with the SSW-IV vehicle-borne laser modeling and measurement system developed by the Chinese Academy of Surveying and Mapping. The laser scanner is mounted at the right rear of the vehicle, and the scanning plane is perpendicular to the direction of travel. Each second, five million measurements are collected across 100 scan lines, with a typical measurement range of 1–500 m and an angular accuracy of 0.1 mrad. Traveling at a speed of 30 km/h resulted in an average spacing of 8 cm between adjacent scan lines and 0.6 cm between two adjacent points directly below the vehicle. The scan angle information is not included in the original .las file.

The datasets shown in Figure 9 include two types of road edge line: straight lines and curves. Two types of curbstones are featured, with right-angle and S-shaped cross-sections. Dataset 1 (185.4 MB, 5.7 million points, 308 m long), displayed in Figure 9(a), is located in an urban area with multiple road lanes, high-rise buildings, trees, a green belt, poles, and wires. One side of the road edge is a straight line; the other consists of broken lines with different tilt angles. Dataset 2 (279.4 MB, 4.8 million points, 300 m long), depicted in Figure 9(b), lies on a winding highway with towering mountains on one side and low-lying valleys on the other; it is surrounded by trees, sidewalks, fences, and electric facilities. With S-shaped cross-sectional roadside stones on both sides, the edges curve with changing radii.

Figure 9 The test area rendered by elevation in (a) Dataset 1 and (b) Dataset 2.

3.2 Estimation of laser scanner ground tracks

The algorithm was performed on three sample datasets. The scanner ground track was estimated from the two datasets at time intervals of 0.05 s, with an average point distance of 0.4–0.6 m. Figure 10 illustrates the differences between the two estimated scanner ground tracks and the actual scanner positions (black points). Red and blue points indicate the horizontal position of NPH before and after refinement, respectively. As shown in Figure 10, the points lie near the vehicle's location on the road, and red dots are mixed with blue. After refinement, the estimated scanner trajectory (formed by the blue points) is smoother. The results show that the deviation of NPH (blue points) from the true value is not affected by road curvature, but only by the roughness of the road surface. To verify the precision and accuracy of NPH, we compared the horizontal deviation from NPH to the actual scanner location. The actual location of the scanner is acquired at a frequency several times that of NPH, so we extracted the scanner position data with the closest acquisition time T.

Figure 10 Comparison of estimated scanner ground tracks with the real scanner position.

The results show that the deviation from the roughly estimated scanner ground tracks of Dataset 1 (such as Area A in Figure 10) to the actual trajectory is relatively consistent across scan lines, while the estimated tracks of Dataset 2 (gravity centers of high-density areas) swing around the actual ones. This phenomenon may be due to the greater roughness of the road surface in Dataset 2, especially on the right side of Area B in Figure 10. However, the refining algorithm redistributes the estimated deviation across scan lines so that the refined scanner trajectory is very close to the actual one. In fact, the more scan lines involved in the refinement, the more stable and accurate the refined tracks are.

The estimated NPH agrees strongly with the actual scanner location, with a maximum deviation of 14.3 cm, a mean deviation of 2.1 cm, and a standard deviation of 1.3 cm. For the proposed boundary tracing algorithm, the minimum width of the buffer is 40 cm, which is five times larger than the horizontal deviation of the estimated points. Thus, NPH can be used for boundary tracking.

Generally speaking, the point density of the road surface closer to the scanner is much higher than that of other road points. MLS systems acquire complete point coverage of the lane under the scanning vehicle, which constitutes the core data of the trajectory estimation algorithm. These dense lane points underpin the correctness of the rough scanner tracks and, consequently, ensure the accuracy of the trajectory estimation algorithm.

3.3 Extraction of edge blocks

Only boundary points on the scan lines of the estimated track were extracted. Compared with extracting on all scan lines, this reduces the number of sampled boundary points and improves processing efficiency. Table 1 shows the integrity and the noise of the edge blocks extracted from the two datasets.

Table 1

Results of road edge blocks detection

| Data | Road boundary | NPH | Edge blocks | Road edge blocks | Real edge blocks | Missed detection (%) | Noise in the road | Noise outside the road |
|---|---|---|---|---|---|---|---|---|
| Dataset 1 | Left | 832 | 832 | 832 | 832 | 0 | 0 | 0 |
| Dataset 1 | Right | | 611 | 529 | 531 | 0.38 | 57 | 25 |
| Dataset 2 | Left | 841 | 839 | 804 | 805 | 0.12 | 20 | 5 |
| Dataset 2 | Right | | 631 | 620 | 620 | 0 | 11 | 0 |

The extracted edge blocks in Dataset 1 are shown in Figure 11. The red points represent the starting points of edge blocks, and the green points are the other points contained in the blocks. On the left of the road, 832 ground track points (blue) correspond to 832 edge blocks (red) arranged neatly along the roadside in Figure 11(c). The test area lying in a clean urban area with no obstructions on this side may be the main reason for this result. On the right, many vehicles are parked on the roadside, scattering and fragmenting the extracted edge blocks. As Table 1 shows, most noisy points were caused by vehicle occlusion within the road; outside the road, noise is rare. Of the 813 edge blocks extracted on the right side of the road, only seven points correspond to three edge blocks; the rest correspond to at most two. At the end of the green belt, two low edge blocks were not extracted.

Figure 11 Extracted edge blocks in Dataset 1. (a) Location of the starting point of the extracted edge blocks; (b) and (c) are enlarged maps of red and green boxes, respectively.

Figure 12 demonstrates the extraction of edge blocks in Dataset 2, which suffers the same vehicle occlusion as Dataset 1; most edge block noise was located within the road surface. Only five noise blocks lay outside the road, and each $NPH_i$ corresponds to no more than two blocks in a scan line. The defined edge blocks are thus very reliable for finding the raised edge and have strong resistance to noise.

Figure 12 Extracted edge blocks in the winding road of Dataset 2. (a) Location of the starting point of the extracted edge blocks; (b) and (c) are enlarged maps of green and red boxes, respectively.

Compared with other studies that extracted a greater number of candidate boundary points interspersed with noise [5,10,16,19], the proposed edge block detection method reduces noise as much as possible. In addition, even when more than one boundary point corresponds to an $NPH_i$, there is little noise on the road surface. The first block extracted on each scan line is very likely to belong to the road edge. Each edge block occupies a certain length, leaving multiple edge blocks on a scan line farther apart than the hunting zone width used in the edge tracking algorithm. As a result, there are only two cases in boundary point tracking: there either is or is not a point within the zone, which makes boundary tracking efficient. Figure 13(b) and (d) shows the positions of the starting points of the edge blocks. The extracted boundary points (red) accurately locate the road edge. The boundary points detected in Dataset 2, shown in Figure 13(d), are not as neat as those of the urban road in Figure 13(b). This may be because the gradient of the S-shaped road curbstone changes constantly and tends to be gentle near the road surface.

Figure 13 Location of extracted boundary points displayed on axonometric and side view. (a) Axonometric view in Dataset 1. (b) Side view in Dataset 1. (c) Axonometric view in Dataset 2. (d) Side view in Dataset 2.

Because of the slope restriction, a defined boundary point must form a slope greater than 30° with the next point. Alternatively, poor data quality, affected by noise and wetness of the road surface, may be responsible.

Missing point clouds are a common phenomenon, reflected in sudden changes in spacing between adjacent points in Figure 13(d). In the axonometric view in Figure 13(a), the extracted boundary points are clustered together, whereas they are scattered in Figure 13(c). Nevertheless, they remain the closest points to the boundary on their scan lines.

Except for the minimal contact area between the lower part of a wheel and the road, the outline points of vehicles show abrupt changes in elevation and distance. These changes are often geometrically backward, that is, the horizontal distance (d) from the scanner decreases, which does not accord with the defined characteristics of boundary points. Only a very small number of edge blocks at the lower parts of wheels were detected, and these were subsequently ignored in the boundary tracking process. Therefore, occlusion by cars parked on the roadside causes little trouble for boundary point detection.

3.4 Boundary tracking

Table 2 lists the boundary points identified by the tracking algorithm. The completeness of the boundary tracking step is above 99.2%. The continuous boundary on the left side of the road in Dataset 1 was completely detected. In addition, the branches (the right sides of Datasets 1 and 2) and the disconnected boundary (the left side of Dataset 2) resulted in the removal of four, two, and two boundary points, respectively, during boundary tracking, as these points do not conform to the main convergence direction of boundary points in their neighborhoods.

Table 2

Tracing results of road boundary points

| Experimental data | Road boundary | Number of road edge blocks | Identified boundary points | Missed boundary points in tracking | Detection rate (%) |
|---|---|---|---|---|---|
| Dataset 1 | Left | 832 | 832 | 0 | 100 |
| Dataset 1 | Right | 529 | 525 | 4 | 99.2 |
| Dataset 2 | Left | 804 | 802 | 2 | 99.7 |
| Dataset 2 | Right | 620 | 618 | 2 | 99.6 |

The road boundary tracking results of Dataset 1 are shown in perspective view in Figure 14. In the figure, blue points denote NPH, red points the starting points of the edge blocks, and green points the tracked points. Although the travel direction (in Figure 14(e)) is not consistent with the boundary, the spacing between the travel direction and the boundary can be traced linearly. One boundary point is omitted in T1, as it is beyond the search length in both directions. Three points in T2 exceed the hunting zone because of a sudden turn at the branch. This may also occur at the ends of arc-shaped green belts and at road forks, but the number of such omitted points is small. Regardless of the form of the boundary, the boundary points in the red circle in Figure 14(c) are extracted; these actually lie on the border rather than the curb. In Area 2, depicted in Figure 14(d), the boundary is broken into very short, widely spaced parts; the quality of this point cloud acquisition is insufficient for boundary extraction, and the long tracking length results in poor reliability. However, this is a rare case. The disconnection overlap area is searched twice from two directions to detect as many tracked points as possible while controlling the risk of omission and misjudgment.

Figure 14 Results of boundary tracking in Dataset 1. (a) Candidate boundary points and (b) boundary points recognized after tracking. (c), (d), and (e) are enlargement maps of Area 1, Area 2, and Area 3, respectively.

In contrast, there is only a small amount of occlusion in Dataset 2. The vehicle trajectory is consistent with the road boundary in most cases, which guarantees the reliability of the boundary tracking algorithm. Except for the loss of track at the top of the greenbelt, similar to T3 in Figure 15(d), ideal boundary tracking is achieved using the spacing between the vehicle and the road edge as the model parameter. The tracking algorithm does not depend on the line shape of the road. To further verify the universality of the algorithm, a T-junction was selected for experiments; the results are shown in Figure 16.

Figure 15 Results of boundary tracking in Dataset 2. (a) Candidate edge points and (b) boundary points recognized after tracking; (c) and (d) are further enlargement maps of Area 4 and Area 5, respectively.

Figure 16 Extraction of boundary points at a T-junction. (a) The inner bend and (b) the outer bend.

Travel speed varies greatly when turning, resulting in uneven NPH spacing. However, the estimated scanner ground trajectory (blue points) in Figure 16 is still smooth, illustrating the general reliability of the presented method for estimating the scanner trajectory. The boundary points of the main road continue to be tracked along the original direction (red points in the upper part of Figure 16(b)) when the vehicle turns onto the branch. Boundary points on the lateral bend at the turning are tracked quickly because of the aggregation of candidate boundary points. Both sides of the bend show good extraction. At road intersections, where road boundaries are not continuous, the proposed method is more efficient and convenient than fitting road boundaries with multiple models.

With short occlusion lengths, the correctness and integrity of boundary point extraction are guaranteed by the proximity and collinearity of boundary points on the pseudo-mileage spacing map. Rather than directly establishing a tracking model for boundary lines, which may require many different models to fit different segments [21,41], the presented method uses a simple linear model to track the spacing between the road boundary and the vehicle. The proposed algorithm does not require the road boundary to be strictly parallel to the vehicle's direction of travel, nor is it related to the line shape of the road boundary. Another advantage is speed: the execution time is extremely short because the algorithm uses the original recording order. Comparatively speaking, calculating the scanner's ground trajectory takes longer than searching for road edge blocks and tracking boundary points, as the latter operate only on scan lines located on NPH. If vehicle trajectory data are available, the extraction process can be completed in real time.

4 Conclusion

This study estimated a scanner's ground track based on the spatial distribution of point clouds. With the estimated track points and multiple constraints of neighborhood elevation difference, slope, and continuity of fluctuation points, accurate edge blocks with strong resistance to noise are defined. The spacing between edge blocks and the scanner track is transformed into a pseudo-mileage spacing map. As the spacing varies little over a short distance, the edge is easy to detect and track using a simple linear model. Experiments on noisy data accurately extracted the locations of edge blocks separated far from one another on a scan line, which guarantees the reliability of this linear tracking model and improves tracking speed. Our results suggest that the proposed method does not depend on whether the road boundary is parallel to the vehicle's direction of travel, nor on the line type of the road boundary. Road edge points can be extracted quickly, effectively, and accurately as long as normal data acquisition quality can be guaranteed. However, the tracking model may fail to recognize lateral branch boundaries with a small number of edge points. In future research, we will focus on improving the recognition of such lateral branch boundaries.

Acknowledgments

Thanks are due to Chuanshuai Zhang for his data collection and technological help, and to Jun Wang for assistance with the experiments.

    Funding information: This study was supported by the National Natural Science Foundation of China, Grant Nos. 41890854; 41372330; 41671436; Natural Science Basic Research Plan in the Shaanxi Province of China, Grant No. 2021JQ-819.

    Author contributions: L. C. S. and M. Q. Z. conceptualized the model. J. F. Z. and M. Q. Z. deduced the method and designed the experiment. J. F. Z. and X. W. developed the model code. M. Q. Z. and J. M. K. performed the data collation and accuracy verification. L. C. S. and J. F. Z. prepared the manuscript with contributions from all co-authors.

    Conflict of interest: Authors state no conflict of interest.

References

[1] Yang B, Wei Z, Li Q, Li J. Automated extraction of street-scene objects from mobile lidar point clouds. Int J Remote Sens. 2012;33(18):5839–61.

[2] Miyazaki R, Yamamoto M, Hanamoto E, Izumi H, Harada K. A line-based approach for precise extraction of road and curb region from mobile mapping data. ISPRS Ann Photogram Remote Sens Spat Inf Sci. 2014;II(5):243–50.

[3] Wu B, Yu B, Huang C, Wu Q, Wu J. Automated extraction of ground surface along urban roads from mobile laser scanning point clouds. Remote Sens Lett. 2016;7(2):170–9.

[4] Yang B, Dong Z, Liu Y, Liang F, Wang Y. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data. ISPRS J Photogram Remote Sens. 2017;126:180–94.

[5] Xu S, Wang R, Zheng H. Road curb extraction from mobile LiDAR point clouds. IEEE Trans Geosci Remote Sens. 2017;55(2):996–1009.

[6] Kumar P, Mcelhinney CP, Lewis P, Mccarthy T. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data. ISPRS J Photogram Remote Sens. 2013;85(11):44–55.

[7] Ibrahim S, Lichti D. Curb-based street floor extraction from mobile terrestrial lidar point cloud. ISPRS – Int Arch Photogram Remote Sens Spat Inf Sci. 2012;39:193–8.

[8] Zai D, Li J, Guo Y, Cheng M, Lin Y, Luo H, et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts. IEEE Trans Intell Transp Syst. 2018;19(3):802–13.

[9] Guo J, Tsai M-J, Han J-Y. Automatic reconstruction of road surface features by using terrestrial mobile lidar. Autom Constr. 2015;58:165–75.

[10] Zhang W, editor. LIDAR-based road and road-edge detection. 2010 IEEE Intelligent Vehicles Symposium, June 21–24, 2010. La Jolla, CA, USA: IEEE; 2010. p. 845–8.

[11] Hervieu A, Soheilian B, editors. Road side detection and reconstruction using LIDAR sensor. 2013 IEEE Intelligent Vehicles Symposium (IV), June 23–26, 2013. Gold Coast, Australia: IEEE; 2013. p. 1247–52.

[12] Ibrahim S, Lichti D. Curb-based street floor extraction from mobile terrestrial lidar point cloud. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B5, XXII ISPRS Congress, 25 August–01 September 2012, Melbourne, Australia; 2012. p. 193–8.

[13] Yadav M, Singh AK, Lohani B. Extraction of road surface from mobile LiDAR data of complex road environment. Int J Remote Sens. 2017;38(16):4655–82.

[14] El-Halawany SI, Lichti DD. Detecting road poles from mobile terrestrial laser scanning data. GISci Remote Sens. 2013;50(6):704–22.

[15] Yang B, Fang L, Li J. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds. ISPRS J Photogram Remote Sens. 2013;79:80–93.

[16] Liu Z, Wang J, Liu D. A new curb detection method for unmanned ground vehicles using 2D sequential laser data. Sens (Basel). 2013;13(1):1102–20.

[17] Jung J, Che E, Olsen MJ, Parrish C. Efficient and robust lane marking extraction from mobile lidar point clouds. ISPRS J Photogram Remote Sens. 2019;147:1–18.

[18] Rodríguez Cuenca B, García Cortés S, Ordóñez Galán C, Alonso MC. An approach to detect and delineate street curbs from MLS 3D point cloud data. Autom Constr. 2015;51:103–12.

[19] Serna A, Marcotegui B. Urban accessibility diagnosis from mobile laser scanning data. ISPRS J Photogram Remote Sens. 2013;84:23–32.

[20] Wijesoma WS, Kodagoda K, Balasuriya AP. Road-boundary detection and tracking using ladar sensing. IEEE Trans Robot Autom. 2004;20(3):456–64.

[21] Sun P, Zhao X, Xu Z, Wang R, Min H. A 3D LiDAR data-based dedicated road boundary detection algorithm for autonomous vehicles. IEEE Access. 2019;7:29623–38.

[22] Boyko A, Funkhouser T. Extracting roads from dense point clouds in large scale urban environment. ISPRS J Photogram Remote Sens. 2011;66(6):S2–12.

[23] Guan H, Li J, Yu Y, Ji Z, Wang C. Using mobile LiDAR data for rapidly updating road markings. IEEE Trans Intell Transp Syst. 2015;16(5):2457–66.

[24] Zhang Y, Wang J, Wang X, Li C, Wang L. 3D LIDAR-based intersection recognition and road boundary detection method for unmanned ground vehicle. 2015 IEEE 18th International Conference on Intelligent Transportation Systems, September 15–18, 2015. Gran Canaria, Spain: IEEE; 2015. p. 499–504.

[25] Yu Y, Li J, Guan H, Jia F, Wang C. Learning hierarchical features for automated extraction of road markings from 3-D mobile LiDAR point clouds. IEEE J Sel Top Appl Earth Observ Remote Sens. 2015;8(2):709–26.

[26] Xia S, Chen D, Wang R. A breakline-preserving ground interpolation method for MLS data. Remote Sens Lett. 2019;10(12):1201–10.

[27] Lin Y, Wang C, Zhai D, Li W, Li J. Toward better boundary preserved supervoxel segmentation for 3D point clouds. ISPRS J Photogram Remote Sens. 2018;143:39–47.

[28] Sha Z, Chen Y, Li W, Wang C, Nurunnabi A, Li J. A boundary-enhanced supervoxel method for extraction of road edges in MLS point clouds. ISPRS – Int Arch Photogram Remote Sens Spat Inf Sci. 2020;XLIII(B1-2020):65–71.

[29] Xu Y, Ye Z, Yao W, Huang R, Stilla U. Classification of LiDAR point clouds using supervoxel-based detrended feature and perception-weighted graphical model. IEEE J Sel Top Appl Earth Observ Remote Sens. 2019;13:72–88.

[30] Ibrahim S, Lichti D. Curb-based street floor extraction from mobile terrestrial lidar point cloud. Int Arch Photogram Remote Sens Spat Inf Sci. 2012;XXXIX(B5):193–8.

[31] Han J, Kim D, Lee M, Sunwoo M. Enhanced road boundary and obstacle detection using a downward-looking LIDAR sensor. IEEE Trans Vehicular Technol. 2012;61(3):971–85.

[32] Yang M, Wan Y, Liu X, Xu J, Wei Z, Chen M, et al. Laser data based automatic recognition and maintenance of road markings from MLS system. Opt Laser Technol. 2018;107:192–203.

[33] Antonio Martín-Jiménez J, Zazo S, Arranz Justel JJ, Rodríguez-Gonzálvez P, González-Aguilera D. Road safety evaluation through automatic extraction of road horizontal alignments from mobile LiDAR system and inductive reasoning based on a decision tree. ISPRS J Photogram Remote Sens. 2018;146:334–46.

[34] Yalcin O, Sayar A, Arar OF, Akpinar S, Kosunalp S. Approaches of road boundary and obstacle detection using LIDAR. IFAC Proc Volumes. 2013;46(25):211–5.

[35] Wang H, Cai Z, Luo H, Cheng W, Li J, editors. Automatic road extraction from mobile laser scanning data. International Conference on Computer Vision in Remote Sensing, December 16–18, 2012. Xiamen, China: IEEE; 2013. p. 136–9.

[36] Jaakkola A, Hyyppä J, Hyyppä H, Kukko A. Retrieval algorithms for road surface modelling using laser-based mobile mapping. Sens (Basel). 2008;8(9):5238–49.

[37] Wang H, Luo H, Wen C, Cheng J, Li P, Chen Y, et al. Road boundaries detection based on local normal saliency from mobile laser scanning data. IEEE Geosci Remote Sens Lett. 2015;12(10):2085–9.

[38] Zhang Y, Wang J, Wang X, Dolan JM. Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor. IEEE Trans Intell Transport Syst. 2018;19(12):3981–91.

[39] Guan H, Li J, Yu Y, Wang C, Chapman M, Yang B. Using mobile laser scanning data for automated extraction of road markings. ISPRS J Photogram Remote Sens. 2014;87:93–107.

[40] Goulette F, Nashashibi F, Abuhadrous I, Ammoun S, Laurgeau C. An integrated on-board laser range sensing system for on-the-way city and road modelling. In Proceedings of the ISPRS Commission I Symposium "From Sensors to Imagery", Paris, France; July 2006.

[41] Wang X, Cai Y, Shi T, editors. Road edge detection based on improved RANSAC and 2D LIDAR data. 2015 International Conference on Control, Automation and Information Sciences (ICCAIS), October 29–31, 2015. Changshu, China: IEEE; 2015. p. 191–6.

Received: 2020-09-12
Revised: 2021-05-07
Accepted: 2021-05-19
Published Online: 2021-06-09

© 2021 Lichun Sui et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.