Seattle Lidar

Written by Peter Steinberg
Created: April 20, 2017
Last updated: August 5, 2021

Visualize Lidar Scattered Point Elevation Data

This notebook uses Datashader to visualize Lidar elevation data from the Puget Sound Lidar Consortium, a source of Lidar data for the Puget Sound region of Washington State, U.S.A.

Lidar Elevation Data

Example X,Y,Z scattered point elevation data from the unpacked 7zip files (unpacked as .gnd files)

! head ../data/q47122d2101.gnd

The Seattle-area example below loads 25 .gnd elevation files like the one above. We'll download, cache, and read the data using Intake.

In [1]:
import intake

cat = intake.open_catalog('./catalog.yml')
list(cat)
In [ ]:
lidar = cat.seattle_lidar()
df = lidar.to_dask()

Geographic Metadata

Since the data are geographic, we need to know the coordinate reference system (CRS) of the X and Y. All the geographic metadata is stored in the data source entry in the intake catalog.

In [ ]:
from pyproj.transformer import Transformer
from pyproj import CRS
In [ ]:
transformer = Transformer.from_crs('epsg:2855', 'epsg:3857')
# from Washington State Plane North (EPSG:2855) to Web Mercator (EPSG:3857)
In [ ]:
FT_2_M = 0.3048  # conversion factor from feet to meters

def convert_coords(df):
    # convert X and Y from feet to meters, then project to Web Mercator
    lon, lat = transformer.transform(df.X.values * FT_2_M, df.Y.values * FT_2_M)
    df['meterswest'], df['metersnorth'] = lon, lat
    return df[['meterswest', 'metersnorth', 'Z']]

Try out the convert_coords function on a subset of the data.

In [ ]:
convert_coords(df.head())

Convert the coordinates

Since our real dataset is large and partitioned using dask, we need to think about how to apply the convert_coords function to our data.

In [ ]:
import dask
import dask.distributed
import dask.delayed
In [ ]:
client = dask.distributed.Client()  # displaying the client shows the dashboard link
client

Explore the task graph to figure out a performant way to split up the coordinate conversion. First we'll try using dask.delayed.

In [ ]:
dask.delayed(convert_coords)(df).visualize()

We can see that even though we thought dask.delayed would help, in actuality it would require all the partitions to be read first, and then the conversion would happen on the whole dataset in one go. A better approach is to use df.map_partitions to apply the conversion to each piece of the data independently.
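The per-partition idea can be sketched without dask at all: since the conversion only looks at one row at a time, each chunk of the dataframe can be transformed independently and the results concatenated. This sketch uses plain pandas with a hypothetical stand-in for convert_coords (synthetic data; the real conversion uses pyproj):

```python
import pandas as pd

# Hypothetical stand-in for convert_coords: a purely row-wise column transform.
def convert(part):
    out = pd.DataFrame()
    out['meterswest'] = part['X'] * 0.3048
    out['metersnorth'] = part['Y'] * 0.3048
    out['Z'] = part['Z']
    return out

# Emulate two dask partitions as separate pandas DataFrames: each one can be
# converted on its own, which is exactly what map_partitions exploits.
parts = [
    pd.DataFrame({'X': [1.0, 2.0], 'Y': [3.0, 4.0], 'Z': [10.0, 20.0]}),
    pd.DataFrame({'X': [5.0], 'Y': [6.0], 'Z': [30.0]}),
]
result = pd.concat([convert(p) for p in parts], ignore_index=True)
print(result.columns.tolist())  # ['meterswest', 'metersnorth', 'Z']
```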

In [ ]:
df_merc = df.map_partitions(convert_coords)

Now that we have set up the task graph, we can use df_merc directly to do the computations on the fly. However, since this dataset fits in memory on my computer, I will do the computation and keep the output in memory for quick use when plotting.

NOTE: This next cell takes about a minute to run. Take a look at the dask dashboard at the location specified above to watch the task progression.

In [ ]:
%time dataset = df_merc.compute()

If all the data doesn't fit in memory on your machine, try downsampling the data from each file to only keep 1/100th of the total data. To avoid unnecessary computation, it is better to do the downsampling first and then convert the coordinates.
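The sample-then-convert ordering can be sketched with plain pandas (synthetic data and a simplified conversion standing in for convert_coords): sampling first means the expensive per-row work only touches 1/100th of the rows.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({'X': rng.random(10_000),
                   'Y': rng.random(10_000),
                   'Z': rng.random(10_000)})

# Sample first so the (relatively expensive) coordinate conversion only sees
# 1/100th of the rows; the reverse order would convert everything, then discard 99%.
small = df.sample(frac=0.01, random_state=0)
converted = small.assign(meterswest=small.X * 0.3048,
                         metersnorth=small.Y * 0.3048)[['meterswest', 'metersnorth', 'Z']]
print(len(converted))  # 100
```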

In [ ]:
small = df.sample(frac=0.01).map_partitions(convert_coords)
In [ ]:
%time small_dataset = small.compute()


In [ ]:
import geoviews as gv
import holoviews as hv
from holoviews import opts
from holoviews.operation.datashader import datashade, rasterize
from datashader import reductions as rd


To simplify the exploration of the time required to display different data, define a function that accepts a regular pandas dataframe or a dask delayed dataframe.

In [ ]:
def plot(data, **kwargs):
    """Plot point elevation data, rasterizing by mean elevation"""
    points = hv.Points(data, kdims=['meterswest', 'metersnorth'], vdims=['Z'])
    image_opts = opts.Image(tools=['hover'], cmap='blues_r', colorbar=True,
                            width=800, height=800, xaxis=None, yaxis=None)
    return rasterize(points, aggregator=rd.mean('Z'), precompute=True, **kwargs).options(image_opts)

We'll also declare a tiler to use for background imagery.

In [ ]:
tiles = gv.tile_sources.EsriImagery()

Then we will construct the plot out of rasterized point data and tiles.

In [ ]:
tiles * plot(small_dataset)


Compare the time it takes to display the downsampled data, the full in-memory dataset, and the on-the-fly dask version:

In [ ]:
%time display(tiles * plot(small_dataset))
In [ ]:
%time display(tiles * plot(dataset))
In [ ]:
%time display(tiles * plot(df_merc))

Visualize Raster of Point Data

If we compute a raster of the point data, we don't need to keep as much data in memory, which should allow faster interactions and makes the result usable with many other tools.

In [ ]:
%time raster = plot(dataset, dynamic=False, width=1000, height=1000).data

The result is an xarray.Dataset with x and y coordinates and a 2D array of Z values. The spacing between adjacent coordinates gives the resolution of the raster in meters:

In [ ]:
raster.metersnorth[1] - raster.metersnorth[0]
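Conceptually, the rasterization above is a per-cell aggregation of the scattered points: each point falls into one grid cell, and the Z values within a cell are averaged. Here's a minimal numpy sketch of that mean-Z binning (grid size and point values are made up; the real work is done by datashader):

```python
import numpy as np

# Scattered points in the unit square, each with an elevation value.
x = np.array([0.1, 0.2, 0.8, 0.9])
y = np.array([0.1, 0.9, 0.1, 0.9])
z = np.array([10.0, 20.0, 30.0, 40.0])

# Bin the points onto a 2x2 grid and average Z within each cell,
# mimicking rasterize(..., aggregator=rd.mean('Z')).
bins = 2
ix = np.minimum((x * bins).astype(int), bins - 1)
iy = np.minimum((y * bins).astype(int), bins - 1)

sums = np.zeros((bins, bins))
counts = np.zeros((bins, bins))
np.add.at(sums, (iy, ix), z)
np.add.at(counts, (iy, ix), 1)
mean_z = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
print(mean_z)  # [[10. 30.], [20. 40.]]
```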

With these data we can use the geo tools in xrspatial to compute and visualize the elevation using hillshading, for instance. See Datashader User Guide #8 for more datashader and xrspatial geo tooling.

In [ ]:
from xrspatial import hillshade
In [ ]:
illuminated = hillshade(raster['Z'])
hv_shaded = hv.Image((raster.meterswest, raster.metersnorth, illuminated))
In [ ]:
tiles * rasterize(hv_shaded.options(opts.Image(cmap='blues', height=600, width=600)))
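For intuition, hillshading can be approximated in a few lines of numpy: compute the surface gradient, derive slope and aspect, and take the dot product with a light direction. This is a simplified Lambertian sketch, not xrspatial's exact algorithm:

```python
import numpy as np

# Simplified Lambertian hillshade (an illustrative sketch only).
def simple_hillshade(z, azimuth=315.0, altitude=45.0):
    az = np.deg2rad(360.0 - azimuth + 90.0)   # light direction, math convention
    alt = np.deg2rad(altitude)                # light elevation angle
    dy, dx = np.gradient(z)
    slope = np.pi / 2.0 - np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    # Cosine of the angle between the surface normal and the light source.
    return (np.sin(alt) * np.sin(slope)
            + np.cos(alt) * np.cos(slope) * np.cos(az - aspect))

terrain = np.outer(np.arange(5.0), np.ones(5))  # a simple linear ramp
shading = simple_hillshade(terrain)
print(shading.shape)  # (5, 5)
```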

Ideas for future work

It'd be nice to have a rasterio writer from xarray so that we could easily write chunked geotiffs from xarray objects.

Something like:

raster.to_rasterio(path=None, mode='w', compute=True)