pyoints.examples package

Submodules

pyoints.examples.base_example module

In this example we learn the basics of point cloud processing using Pyoints.

We begin by loading the required modules.

>>> import os
>>> import numpy as np
>>> from pyoints import (
...     storage,
...     transformation,
...     IndexKD,
... )
>>> from pyoints.misc import print_rounded

Then we define input and output paths.

>>> inpath = os.path.join(
...                 os.path.dirname(os.path.abspath(__file__)), 'data')
>>> outpath = os.path.join(
...                 os.path.dirname(os.path.abspath(__file__)), 'output')

We select an input LAS point cloud.

>>> infile = os.path.join(inpath, 'forest.las')
>>> lasReader = storage.LasReader(infile)

We get the origin of the point cloud and the date stored in the file.

>>> print_rounded(lasReader.t.origin, 2)
[  364187.98  5509577.71       -1.58]
>>> print(lasReader.date.year)
2018

Then, we get the projection of the point cloud…

>>> print(lasReader.proj.proj4)
+proj=utm +zone=32 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs

… and the number of points.

>>> print(len(lasReader))
482981

We get the spatial extent in 3D.

>>> print_rounded(lasReader.extent, 2)
[  364187.98  5509577.71       -1.58   364194.95  5509584.71       28.78]

Now, we load the point cloud data from disk. We receive an instance of GeoRecords, which is an extension of a numpy record array.

>>> las = lasReader.load()
>>> print(las.shape)
(482981,)
>>> print(las.date.year)
2018

We get some information on the attributes of the points…

>>> print(sorted(las.dtype.descr))
[('classification', '|u1'), ('coords', '<f8', (3,)), ('intensity', '|u1')]
>>> print_rounded(las[0:10].intensity)
[216 213 214 199 214 183 198 209 200 199]
>>> print_rounded(np.unique(las.classification))
[2 5]

… and take a look at the extent in 2D and 3D.

>>> print_rounded(las.extent(2), 2)
[  364187.98  5509577.71   364194.95  5509584.71]
>>> print_rounded(las.extent(3), 2)
[  364187.98  5509577.71       -1.58   364194.95  5509584.71       28.78]
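
As a side note: since GeoRecords extends a numpy record array, the usual numpy indexing applies as well. A minimal sketch (the variable names 'ground' and 'vegetation' are ours, assuming the usual LAS meaning of the class codes listed above), splitting the points by classification:

>>> ground = las[las.classification == 2]
>>> vegetation = las[las.classification == 5]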

Now we take a closer look at the spatial index. We begin by selecting all points close to the corners of the point cloud’s extent.

>>> radius = 1.0
>>> corners = las.extent().corners
>>> print_rounded(corners, 1)
[[  364188.   5509577.7       -1.6]
 [  364194.9  5509577.7       -1.6]
 [  364194.9  5509584.7       -1.6]
 ...,
 [  364194.9  5509584.7       28.8]
 [  364194.9  5509577.7       28.8]
 [  364188.   5509577.7       28.8]]

But before we select the points, we count the number of neighbors within the radius.

>>> count = las.indexKD().ball_count(radius, coords=corners)
>>> print_rounded(count)
[2502 1984 4027  475    0    0    0    0]

OK, now we actually select the points.

>>> n_ids = las.indexKD().ball(corners, radius)
>>> print(len(n_ids))
8

For each corner we receive a list of point indices, so we concatenate them to save the resulting subset as a point cloud.

>>> n_ids = np.concatenate(n_ids).astype(int)
>>> print(len(n_ids))
8988
>>> outfile = os.path.join(outpath, 'base_ball.las')
>>> storage.writeLas(las[n_ids], outfile)

We can also select the k nearest neighbors.

>>> dists, n_ids = las.indexKD().knn(corners, k=2)

We receive a matrix of distances and a matrix of indices, with one row per corner and one column per neighbor.

>>> print_rounded(dists, 2)
[[ 0.3   0.3 ]
 [ 0.02  0.04]
 [ 0.49  0.49]
 [ 0.62  0.62]
 [ 3.95  3.97]
 [ 5.71  5.73]
 [ 1.26  1.27]
 [ 1.65  1.66]]
>>> print_rounded(n_ids)
[[     6  16742]
 [ 92767  92763]
 [320695 321128]
 [206255 206239]
 [440696 440687]
 [400070 400050]
 [400369 400340]
 [365239 365240]]
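
As a minimal sketch (the variable name 'nearest_coords' is ours), we could use the first column of the index matrix to look up the coordinates of each corner’s single nearest neighbor:

>>> nearest_coords = las.coords[n_ids[:, 0]]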

Again, we save the resulting subset.

>>> n_ids = n_ids.flatten()
>>> len(n_ids)
16
>>> outfile = os.path.join(outpath, 'base_knn.las')
>>> storage.writeLas(las[n_ids], outfile)

If we need to select points within an ellipsoid, we can also create a scaled spatial index, in which each coordinate axis is scaled individually.

>>> T = transformation.s_matrix([1.5, 0.9, 0.5])
>>> indexKD = IndexKD(las.coords, T=T)
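
To get a feeling for the effect of the scaling (a rough sketch; the variable name 'semi_axes' is ours): a ball query in the scaled space corresponds to an axis-aligned ellipsoid in the original coordinate system, whose semi-axes are the radius divided by the individual scale factors.

>>> semi_axes = radius / np.array([1.5, 0.9, 0.5])  # roughly [0.67, 1.11, 2.0]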

Again, we select neighboring points using a ball query. But here, we need to transform the query coordinates into the scaled space beforehand.

>>> s_corners = T.to_local(corners)
>>> print_rounded(s_corners, 1)
[[  546282.   4958619.9       -0.8]
 [  546292.4  4958619.9       -0.8]
 [  546292.4  4958626.2       -0.8]
 ...,
 [  546292.4  4958626.2       14.4]
 [  546292.4  4958619.9       14.4]
 [  546282.   4958619.9       14.4]]

Finally, we apply the query and save the subset.

>>> n_ids = indexKD.ball(s_corners, radius)
>>> n_ids = np.concatenate(n_ids).astype(int)
>>> print(len(n_ids))
8520
>>> outfile = os.path.join(outpath, 'base_ellipsoid.las')
>>> storage.writeLas(las[n_ids], outfile)

pyoints.examples.grid_example module

In this example we load a point cloud and convert it to rasters and voxels.

>>> import os
>>> import numpy as np
>>> from pyoints import (
...     storage,
...     projection,
...     transformation,
...     grid,
...     Grid,
...     filters,
...     Surface,
... )
>>> from pyoints.misc import print_rounded

First, we define input and output paths.

>>> inpath = os.path.join(
...                 os.path.dirname(os.path.abspath(__file__)), 'data')
>>> outpath = os.path.join(
...                 os.path.dirname(os.path.abspath(__file__)), 'output')

Then, we select an input LAS point cloud.

>>> infile = os.path.join(inpath, 'forest.las')
>>> lasReader = storage.LasReader(infile)
>>> las = lasReader.load()

Now, let’s convert the point cloud to a raster. All points within a raster cell are grouped into an individual point cloud.

>>> T = transformation.matrix(t=las.t.origin[:2], s=[1.0, 2.0])
>>> raster = grid.voxelize(las, T)

Let’s inspect the properties of the raster.

>>> print(raster.shape)
(4, 7)
>>> print(raster.dtype)
object
>>> print(len(raster[0, 0]))
11473
>>> print(sorted(raster[0, 0].dtype.descr))
[('classification', '|u1'), ('coords', '<f8', (3,)), ('intensity', '|u1')]
>>> print(sorted(raster[0, 0].dtype.names))
['classification', 'coords', 'intensity']
>>> print(raster[0, 0].dtype.type)
<class 'numpy.record'>
>>> print(raster[0, 0].dtype.flags)
16
>>> print(raster[0, 0].dtype.str)
|V26
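
To get a quick overview of how the points are distributed over the cells, we can apply Python’s len to each cell of the object array. This is just a sketch; the variable name 'cell_counts' is ours.

>>> cell_counts = np.vectorize(len)(raster)
>>> print(cell_counts.shape)
(4, 7)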

We can save the points of specific cells individually.

>>> outfile = os.path.join(outpath, 'grid_cell.las')
>>> storage.writeLas(raster[2, 1], outfile)

Next, we create a new raster, this time aggregating the point cloud data with a custom aggregation function.

>>> T = transformation.matrix(t=las.t.origin[:2], s=[0.3, 0.3])
>>> def aggregate_function(ids):
...     z = las.coords[ids, 2]
...     n = len(ids)
...     z_min = z.min() if n > 0 else 0
...     z_mean = z.mean() if n > 0 else 0
...     z_max = z.max() if n > 0 else 0
...     return (n, [z_min, z_mean, z_max])
>>> dtype = [('cell_count', int), ('z', float, 3)]
>>> raster = grid.voxelize(las, T, agg_func=aggregate_function, dtype=dtype)
>>> raster = Grid(las.proj, raster, T)
>>> print(raster.shape)
(24, 24)
>>> print(sorted(raster.dtype.names))
['cell_count', 'coords', 'z']
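
Since the 'z' field stores the minimum, mean and maximum elevation per cell, we can, for instance, derive the elevation range of each cell directly from the raster (a sketch; the variable name 'z_range' is ours):

>>> z_range = raster.z[:, :, 2] - raster.z[:, :, 0]
>>> print(z_range.shape)
(24, 24)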

We save the fields as individual raster images.

>>> outfile = os.path.join(outpath, 'grid_count.tif')
>>> storage.writeRaster(raster, outfile, field='cell_count')
>>> outfile = os.path.join(outpath, 'grid_z.tif')
>>> storage.writeRaster(raster, outfile, field='z')

Let’s test whether the projection has been set.

>>> handler = storage.RasterReader(outfile)
>>> raster.proj.wkt == handler.proj.wkt
True

Now, let’s create a three-dimensional voxel space.

>>> T = transformation.matrix(t=las.t.origin, s=[0.4, 0.4, 0.5])
>>> def aggregate_function(ids):
...     intensity = las.intensity[ids]
...     n = len(ids)
...     intensity = intensity.mean() if n > 0 else 0
...     return (n, intensity)
>>> dtype = [('cell_count', int), ('intensity', int)]
>>> voxels = grid.voxelize(las, T, agg_func=aggregate_function, dtype=dtype)
>>> voxels = Grid(las.proj, voxels, T)
>>> print_rounded(voxels.shape)
(61, 18, 18)
>>> print(sorted(voxels.dtype.names))
['cell_count', 'coords', 'intensity']

Finally, we save only the non-empty voxel cells.

>>> outfile = os.path.join(outpath, 'grid_voxels.las')
>>> storage.writeLas(voxels[voxels.cell_count > 0], outfile)
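
If we are curious how sparsely the voxel space is populated, we can compute the fraction of non-empty cells (a sketch; the variable name 'occupancy' is ours):

>>> occupancy = np.mean(voxels.cell_count > 0)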

pyoints.examples.stemfilter_example module

In this example, we try to detect tree stems in a forest using a point cloud derived from terrestrial laser scans. Several .las files will be generated along the way; they are best inspected with software like CloudCompare.

Let’s begin by loading the required modules.

>>> import os
>>> import numpy as np
>>> from pyoints.interpolate import KnnInterpolator
>>> from pyoints import (
...     storage,
...     grid,
...     transformation,
...     filters,
...     clustering,
...     classification,
...     vector,
...     GeoRecords,
...     interpolate,
... )
>>> from pyoints.misc import print_rounded

First, we define an input and an output path.

>>> inpath = os.path.join(
...                 os.path.dirname(os.path.abspath(__file__)), 'data')
>>> outpath = os.path.join(
...                 os.path.dirname(os.path.abspath(__file__)), 'output')

Thereafter, we load an input LAS point cloud.

>>> infile = os.path.join(inpath, 'forest.las')
>>> lasReader = storage.LasReader(infile)
>>> las = lasReader.load()
>>> print(len(las))
482981

The basic idea of the algorithm is to first derive a digital elevation model (DEM) in order to calculate the height above ground. We simply rasterize the point cloud, keeping the lowest z-coordinate of each cell.

>>> T = transformation.matrix(t=las.t.origin[:2], s=[0.8, 0.8])
>>> def aggregate_function(ids):
...     return las.coords[ids, 2].min() if len(ids) > 0 else np.nan
>>> dtype = [('z', float)]
>>> dem_grid = grid.voxelize(las, T, agg_func=aggregate_function, dtype=dtype)
>>> print_rounded(dem_grid.shape)
(9, 9)

We save the DEM as a .tif image.

>>> outfile = os.path.join(outpath, 'stemfilter_dem.tif')
>>> storage.writeRaster(dem_grid, outfile, field='z')

We create a surface interpolator from the DEM cells.

>>> dem = KnnInterpolator(dem_grid.records().coords, dem_grid.records().z)

For the stem detection, we will focus on points with a height above ground greater than 0.5 m.

>>> height = las.coords[:, 2] - dem(las.coords)
>>> s_ids = np.where(height > 0.5)[0]
>>> print(len(s_ids))
251409
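
As a side note, the height above ground could also be kept as an additional point attribute instead of being used for filtering only. A minimal sketch (the variable name 'las_h' is ours), using the add_fields method that appears again further below:

>>> las_h = las.add_fields([('height', float)], data=[height])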

We filter the point cloud using a small filter radius, keeping only a subset of points with a mutual distance of at least 10 cm.

>>> f_ids = list(filters.ball(las.indexKD(), 0.1, order=s_ids))
>>> las = las[f_ids]
>>> print(len(las))
11181
>>> outfile = os.path.join(outpath, 'stemfilter_ball_10.las')
>>> storage.writeLas(las, outfile)

To reduce noise, we only keep points with more than 10 neighbors within a radius of 0.3 m.

>>> count = las.indexKD().ball_count(0.3)
>>> mask = count > 10
>>> las = las[mask]
>>> print(len(las))
8154
>>> outfile = os.path.join(outpath, 'stemfilter_denoised.las')
>>> storage.writeLas(las, outfile)

Now, we filter with a radius of 1 m. This results in a point cloud with point distances of at least 1 m, in which points associated with stems are arranged in roughly straight lines.

>>> f_ids = list(filters.ball(las.indexKD(), 1.0))
>>> las = las[f_ids]
>>> print(len(las))
189
>>> outfile = os.path.join(outpath, 'stemfilter_ball_100.las')
>>> storage.writeLas(las, outfile)

For dense point clouds, this filtering technique results in point distances between 1 m and 2 m. Thus, we can assume that linearly arranged points should have 2 to 3 neighboring points within a radius of 1.5 m.

>>> count = las.indexKD().ball_count(1.5)
>>> mask = np.all((count >= 2, count <= 3), axis=0)
>>> las = las[mask]
>>> print(len(las))
84
>>> outfile = os.path.join(outpath, 'stemfilter_linear.las')
>>> storage.writeLas(las, outfile)

Now, the stems are clearly visible in the point cloud. Thus, we can detect the stems by clustering the points.

>>> cluster_indices = clustering.dbscan(las.indexKD(), 2, epsilon=1.5)
>>> print(len(cluster_indices))
84
>>> print_rounded(np.unique(cluster_indices))
[-1  0  1  2  3  4  5]

In the next step, we remove small clusters and unassigned points (labelled -1).

>>> cluster_dict = classification.classes_to_dict(cluster_indices, min_size=5)
>>> cluster_indices = classification.dict_to_classes(cluster_dict, len(las))
>>> print_rounded(sorted(cluster_dict.keys()))
[0 1 3 5]
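
Since classes_to_dict maps each remaining cluster to the indices of its points, we can easily inspect the cluster sizes (a sketch; the variable name 'sizes' is ours):

>>> sizes = {tree_id: len(ids) for tree_id, ids in cluster_dict.items()}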

We add an additional field to the point cloud to store the tree number.

>>> las = las.add_fields([('tree_id', int)], data=[cluster_indices])
>>> outfile = os.path.join(outpath, 'stemfilter_stems.las')
>>> storage.writeLas(las, outfile)

Now, we can fit a vector to each stem. It is worth taking a close look at the characteristics of the resulting Vector objects; a brief sketch follows below.

>>> stems = {}
>>> for tree_id in cluster_dict:
...     coords = las[cluster_dict[tree_id]].coords
...     stem = vector.Vector.from_coords(coords)
...     stems[tree_id] = stem
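
For instance, a fitted stem axis could be inspected as follows. This is only a sketch: we assume the Vector object exposes its axis via the attributes 'origin' and 'vec'; please check the Vector documentation for the actual interface.

>>> stem = stems[0]
>>> # 'vec' is an assumed attribute holding the direction of the fitted axis
>>> direction = stem.vec / np.linalg.norm(stem.vec)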

Finally, we determine the tree root coordinates using the previously derived digital elevation model.

>>> roots = []
>>> for tree_id in stems:
...     stem = stems[tree_id]
...     coord = vector.vector_surface_intersection(stem, dem)
...     roots.append((tree_id, coord))
>>> dtype = [('tree_id', int), ('coords', float, 3)]
>>> roots = np.array(roots, dtype=dtype).view(np.recarray)
>>> roots = GeoRecords(las.proj, roots)
>>> outfile = os.path.join(outpath, 'stemfilter_roots.las')
>>> storage.writeLas(roots, outfile)

Module contents

Learn how to use Pyoints as a powerful tool.