Aerial Lidar Point Cloud Voxelization with its
3D Ground Filtering Application
Liying Wang, Yan Xu, and Yu Li
Abstract
Compared to the raster grid, the Triangulated Irregular Network (TIN), and the point cloud, the benefit of a voxel representation is that an implicit notion of adjacency and a true 3D representation are provided simultaneously. A binary voxel-based data (BVD) model is proposed to reconstruct the aerial lidar point cloud, and, based on the constructed model, a voxel-based 3D ground filtering (V3GF) algorithm is developed to separate ground points from non-ground ones. The proposed V3GF algorithm selects the lowest voxels with a value of 1 as ground seeds and then labels them and their 3D connected set as ground voxels. The ISPRS benchmark dataset is used to compare the performance of V3GF with that of eight other published filtering methods. Results indicate that V3GF improves on Axelsson's performance on five samples in terms of total error. The average Kappa coefficients for sites with relatively flat urban areas, rough slopes, and discontinuous surfaces are 92.49 percent, 72.23 percent, and 61.27 percent, respectively.
Introduction
The raster grid, the Triangulated Irregular Network (TIN), and the point cloud are the commonly used representations of scattered light detection and ranging (lidar) point cloud data. Raster grid and TIN representations are both 2.5-dimensional (2.5D) and excellent for modeling continuous surfaces such as typical bare earth, but they cannot provide a 3D representation, which requires multiple elevation (or Z) values for the same horizontal coordinates (X and Y) (Stoker, 2009). Although the point cloud representation can be truly 3D, its topological and adjacency information is difficult to exploit. To overcome these restrictions, a binary voxel-based data (BVD) model is presented, which is a simpler 3D representation and in which topological and adjacency relations between voxels can be established much more easily. The voxelized representation is obtained by dividing the entire scene volume into a 3D regular grid (the 3D sub-volumes, called voxels), remapping the original data points to the 3D voxels, and assigning each voxel the value 1 if it contains lidar points and 0 otherwise.
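For illustration, a minimal NumPy sketch of this voxelization step is given below; the function name, the single scalar voxel resolution, and the choice of the point cloud's minimum coordinates as the grid origin are assumptions made here for illustration rather than the implementation used in this paper.

    import numpy as np

    def voxelize(points, resolution):
        """Build a binary voxel-based data (BVD) model from an N x 3 array of
        lidar points: a voxel holds 1 if it contains at least one point."""
        origin = points.min(axis=0)                                   # lower corner of the scene volume
        idx = np.floor((points - origin) / resolution).astype(int)    # quantize X, Y, Z
        dims = idx.max(axis=0) + 1                                    # grid size in voxels
        bvd = np.zeros(dims, dtype=np.uint8)
        bvd[idx[:, 0], idx[:, 1], idx[:, 2]] = 1                      # occupied voxels get value 1
        return bvd, origin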
The use of voxels has its origins in medical imaging (Wright et al., 1995). In medical applications, 3D volume data are gathered directly from medical-imaging devices and viewed as the raw data. In the proposed approach, point clouds are the raw data and must first be transformed into 3D volume data. During the transformation, the most important parameter to be determined is the voxel resolution, which denotes the size of a single voxel and is used for the quantization of 3D space. Voxel-based lidar analyses have mainly been applied to forest structure (Lefsky et al., 1999; Weishampel et al., 2000; Riaño et al., 2003; Sinoquet et al., 2001; Popescu and Zhao, 2008; Lee et al., 2004; Chasmer et al., 2004; Wang et al., 2008; Hosoi et al., 2013; Hagstrom, 2014). In all these analyses, the voxel resolution is predefined to fit the application.
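To make the role of the voxel resolution explicit, the quantization it implies can be written as follows, assuming (as in the sketch above) a uniform resolution r along all three axes and the minimum scene coordinates as the grid origin:

    \[
    (i, j, k) = \left( \left\lfloor \frac{x - x_{\min}}{r} \right\rfloor,\;
                       \left\lfloor \frac{y - y_{\min}}{r} \right\rfloor,\;
                       \left\lfloor \frac{z - z_{\min}}{r} \right\rfloor \right)
    \]

so that a point (x, y, z) is mapped to voxel (i, j, k), which is then assigned the value 1.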
Based on the BVD model, a voxel-based 3D ground filtering (V3GF) algorithm is designed to detect the ground automatically. Over the past decades, various types of filtering methods have been developed, among which morphological filters (Kilian et al., 1996; Zhang et al., 2003; Chen et al., 2007; Shen et al., 2011; Cui et al., 2013; Pingel et al., 2013), interpolation-based filters (Kraus and Pfeifer, 1998; Mongus and Žalik, 2012; Chen et al., 2013; Brovelli et al., 2002; Axelsson, 2000), slope-based filters (Roggero, 2001; Sithole, 2001; Vosselman, 2000; Shao and Chen, 2008; Yang et al., 2016), and segmentation/cluster-based filters (Filin, 2002; Sithole, 2005; Hu et al., 2012; Yan et al., 2012; Lin and Zhang, 2014) rank among the most popular approaches. The idea of the V3GF algorithm is completely different from that of existing algorithms: it is based on 3D connectivity. V3GF selects the lowest voxels with a value of 1 as ground seeds and then labels them and their 3D connected set as ground voxels. The advantage of the presented method lies in making better use of the connectivity between voxels.
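To illustrate the seed-and-grow idea, a minimal sketch using SciPy's connected-component labelling is given below; how the "lowest" occupied voxels are actually selected by V3GF is detailed later in the paper, so the seeding from the globally lowest occupied layer and the use of 26-connectivity here are illustrative assumptions, not the authors' exact procedure.

    import numpy as np
    from scipy import ndimage

    def v3gf_sketch(bvd):
        """Illustrative sketch of the V3GF idea on a binary voxel grid `bvd`
        (1 = voxel contains lidar points, 0 = empty). Seeds are the occupied
        voxels in the lowest occupied layer; every occupied voxel 26-connected
        to a seed is labelled as ground."""
        # Label the 26-connected components of the occupied voxels.
        structure = np.ones((3, 3, 3), dtype=int)      # full 3D (26-) connectivity
        labels, _ = ndimage.label(bvd, structure=structure)

        # Ground seeds: occupied voxels in the lowest layer containing any.
        k_min = np.argwhere(bvd == 1)[:, 2].min()
        seed_labels = np.unique(labels[:, :, k_min][bvd[:, :, k_min] == 1])

        # A voxel is ground if its component contains at least one seed.
        ground = np.isin(labels, seed_labels) & (bvd == 1)
        return ground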
The organization of the paper is as follows. The proposed BVD model and the V3GF algorithm are described in the next section, followed by a description of the lidar point cloud dataset used in the test. Next, the results of the experiments are presented and discussed, followed by an outlook and a summary.
Voxelization and Voxel-based 3D Ground Filtering
A flowchart of the voxelization and V3GF algorithm is shown in Figure 1. In lidar data, local outliers are often randomly distributed over a scene and originate from hits off objects such as birds or low-flying aircraft, errors in the laser range-finder, or multi-path errors. Normally, these data points do not belong to the scene and give rise to unreasonable elevation values. If outliers are distributed over a scene, an incorrect terrain representation will result and the accuracy of the established BVD model will be degraded. Furthermore, low outliers, whose elevations are lower than those of ground points, may be mistaken for ground seeds, which reduces the filtering accuracy. Outliers must therefore be detected and removed first. Then, a BVD model is obtained to reconstruct the aerial lidar point cloud through the voxelization process, where a voxel value of 1 or 0 indicates whether or not the voxel contains lidar points. Finally, a voxel-based 3D ground filtering (V3GF) algorithm is applied to distinguish ground voxels from non-ground ones.
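Read as pseudocode, the workflow of Figure 1 amounts to three steps chained together; the helper functions named below are the illustrative sketches given elsewhere in this section (remove_outliers is sketched under Outliers Removal), not the authors' code.

    def filter_ground(points, resolution):
        """End-to-end sketch of the Figure 1 workflow: outlier removal,
        voxelization into a BVD model, then voxel-based 3D ground filtering."""
        clean = remove_outliers(points)              # drop gross low/high outliers
        bvd, origin = voxelize(clean, resolution)    # binary occupancy grid (BVD model)
        ground = v3gf_sketch(bvd)                    # label ground voxels by 3D connectivity
        return clean, bvd, ground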
Voxelization
Outliers Removal
The data space is broken down into horizontal slices of a certain thickness, and then the number of points inside each slice is counted and visualized. Outliers lie far above or below the terrain surface and are sparsely distributed, and can therefore be located in the lowest and highest tails of the resulting distribution of point counts.
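For illustration, this slicing-and-counting step can be sketched as follows in Python/NumPy; the slab thickness, the minimum point count per slab, and the rule of discarding sparse slabs only at the two ends of the elevation range are illustrative assumptions rather than the thresholds used in this paper.

    import numpy as np

    def remove_outliers(points, slice_thickness=1.0, min_count=3):
        """Sketch of slice-based outlier removal: cut the elevation range into
        horizontal slabs, count points per slab, and drop the points falling in
        sparsely populated slabs at the extreme low and high ends."""
        z = points[:, 2]
        edges = np.arange(z.min(), z.max() + slice_thickness, slice_thickness)
        counts, _ = np.histogram(z, bins=edges)
        slab = np.clip(np.digitize(z, edges) - 1, 0, counts.size - 1)

        keep = np.ones(points.shape[0], dtype=bool)
        for s in range(counts.size):                 # walk up from the lowest slab
            if counts[s] >= min_count:
                break
            keep &= slab != s                        # sparse low tail: outliers
        for s in range(counts.size - 1, -1, -1):     # walk down from the highest slab
            if counts[s] >= min_count:
                break
            keep &= slab != s                        # sparse high tail: outliers
        return points[keep]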