Pipeline¶
Pipelines are the operative construct in PDAL. When you run a command such as translate, PDAL constructs a pipeline internally to perform the requested data translation. While task-specific applications are useful in many contexts, a pipeline provides some useful advantages for more complex operations:
- You have a record of the operation(s) applied to the data
- You can construct a skeleton of an operation and substitute specific options (filenames, for example)
- You can construct complex operations using the JSON manipulation facilities of whatever language you want.
Note
The pipeline command is used to invoke pipeline operations via the command line.
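For example, a pipeline saved to a file named pipeline.json (a placeholder name) can be executed with:

pdal pipeline pipeline.json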
Warning
As of PDAL 1.2, JSON is the preferred specification language for PDAL pipelines. XML read support is still available in 1.2, but it will be dropped in a future release.
Introduction¶
A PDAL JSON object represents a processing pipeline.
A complete PDAL JSON data structure is always an object (in JSON terms). In PDAL JSON, an object consists of a collection of name/value pairs – also called members. For each member, the name is always a string. Member values are either a string, number, object, array or one of the literals: “true”, “false”, and “null”. An array consists of elements where each element is a value as described above.
Examples¶
A simple PDAL pipeline, inferring the appropriate drivers for the reader and writer from filenames, can be specified as a set of sequential steps:
{
  "pipeline":[
    "input.las",
    {
      "type":"crop",
      "bounds":"([0,100],[0,100])"
    },
    "output.bpf"
  ]
}

A simple pipeline to convert LAS to BPF while only keeping points inside the box \([0 \leq x \leq 100, 0 \leq y \leq 100]\).
A more complex PDAL pipeline, which reprojects the output of the A.las reader (tagging the reprojected result A2), merges that result with the stage tagged B, and writes the merged output with the writers.p2g plugin:
{
  "pipeline":[
    {
      "filename":"A.las",
      "spatialreference":"EPSG:26916"
    },
    {
      "type":"filters.reprojection",
      "in_srs":"EPSG:26916",
      "out_srs":"EPSG:4326",
      "tag":"A2"
    },
    {
      "filename":"B.las",
      "tag":"B"
    },
    {
      "type":"filters.merge",
      "tag":"merged",
      "inputs":[
        "A2",
        "B"
      ]
    },
    {
      "type":"writers.p2g",
      "filename":"output.tif"
    }
  ]
}

A more complex pipeline that merges two inputs together, using filters.reprojection to transform the coordinate system of file A.las from UTM to Geographic.
Definitions¶
- JavaScript Object Notation (JSON), and the terms object, name, value, array, and number, are defined in IETF RFC 4627, at http://www.ietf.org/rfc/rfc4627.txt.
- The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this documentation are to be interpreted as described in IETF RFC 2119, at http://www.ietf.org/rfc/rfc2119.txt.
Pipeline Objects¶
PDAL JSON pipelines always consist of a single object. This object (referred to as the PDAL JSON object below) represents a processing pipeline.
- The PDAL JSON object may have any number of members (name/value pairs).
- The PDAL JSON object must have a Pipeline Array.
Pipeline Array¶
- The pipeline array may have any number of elements, each of which is either a string or a Stage Object.
- String elements shall be interpreted as filenames. PDAL will attempt to infer the proper driver from the file extension and position in the array. A writer stage will only be created if the string is the final element in the array.
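For instance, the following minimal pipeline (with hypothetical filenames) creates a LAS reader and a BPF writer purely by inference, with the final string becoming the writer:

{
  "pipeline":[
    "input.las",
    "output.bpf"
  ]
}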
Stage Objects¶
For more on PDAL stages and their options, check the PDAL documentation on Readers, Writers, and Filters.
- A stage object may have a member with the name tag whose value is a string. The purpose of the tag is to cross-reference this stage within other stages. Each tag must be unique.
- A stage object may have a member with the name inputs whose value is an array of strings. Each element in the array is the tag of another stage to be set as input to the current stage.
- Reader stages will disregard the inputs member.
- If inputs is not specified for the first non-reader stage, all reader stages leading up to the current stage will be used as inputs.
- If inputs is not specified for any subsequent non-reader stages, the previous stage in the array will be used as input.
- A tag mentioned in another stage's inputs must have been previously defined in the pipeline array.
- A reader or writer stage object may have a member with the name type whose value is a string. The type must specify a valid PDAL reader or writer name.
- A filter stage object must have a member with the name type whose value is a string. The type must specify a valid PDAL filter name.
- A stage object may have additional members with names corresponding to stage-specific option names and their respective values.
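As an illustration of the inputs rules above, the following sketch (all filenames hypothetical) wires two tagged readers into a merge filter explicitly; the reprojection filter that follows specifies no inputs, so it implicitly takes the previous stage (the merge) as input:

{
  "pipeline":[
    {
      "filename":"a.las",
      "tag":"A"
    },
    {
      "filename":"b.las",
      "tag":"B"
    },
    {
      "type":"filters.merge",
      "inputs":["A", "B"]
    },
    {
      "type":"filters.reprojection",
      "out_srs":"EPSG:4326"
    },
    "merged.las"
  ]
}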
Filename Globbing¶
- A filename may contain the wildcard character * to match any string of characters. This can be useful when working with multiple input files in a directory (e.g., merging all files; see the Globbed Inputs example below).
Extended Examples¶
BPF to LAS¶
The following pipeline converts the input file from BPF to LAS, inferring both the reader and writer type, and setting a number of options on the writer stage.
{
  "pipeline":[
    "utm15.bpf",
    {
      "filename":"out2.las",
      "scale_x":0.01,
      "offset_x":311898.23,
      "scale_y":0.01,
      "offset_y":4703909.84,
      "scale_z":0.01,
      "offset_z":7.385474
    }
  ]
}
Python HAG¶
In our next example, the reader and writer types are once again inferred. After reading the input file, the ferry filter is used to copy the Z dimension into a new height above ground (HAG) dimension. Next, the filters.programmable filter is used with a Python script to compute height above ground values by comparing the Z values to a surface model. These height above ground values are then written back into the Z dimension for further analysis.
See also
filters.height describes using a specific filter to do this job in more detail.
{
  "pipeline":[
    "autzen.las",
    {
      "type":"ferry",
      "dimensions":"Z=HAG"
    },
    {
      "type":"programmable",
      "script":"hag.py",
      "function":"filter",
      "module":"anything"
    },
    "autzen-hag.las"
  ]
}
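The hag.py script itself is not shown here. As a minimal sketch of the interface such a script must satisfy: the programmable filter passes the named function dictionaries of NumPy arrays keyed by dimension name, and the function returns True on success. The flat ground surface below is a stand-in for a real surface-model lookup:

import numpy as np

def filter(ins, outs):
    # 'ins' and 'outs' map dimension names to NumPy arrays.
    z = ins['Z']

    # Hypothetical stand-in: a real script would sample a surface
    # model (DTM) at each point's X/Y to get the ground elevation.
    ground = np.zeros_like(z)

    # Write the height-above-ground values back into Z.
    outs['Z'] = z - ground
    return True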
DTM¶
A common task is to create a digital terrain model (DTM) from the input point cloud. This pipeline infers the reader type, applies an approximate ground segmentation filter using filters.ground, and then creates the DTM using the writers.p2g writer with only the ground returns.
{
  "pipeline":[
    "autzen-full.las",
    {
      "type":"ground",
      "approximate":true,
      "max_window_size":33,
      "slope":1.0,
      "max_distance":2.5,
      "initial_distance":0.15,
      "cell_size":1.0,
      "extract":true,
      "classify":false
    },
    {
      "type":"writers.p2g",
      "filename":"autzen-surface.tif",
      "output_type":"min",
      "output_format":"tif",
      "grid_dist_x":1.0,
      "grid_dist_y":1.0
    }
  ]
}
Decimate & Colorize¶
This example still infers the reader and writer types while applying options on both. The pipeline decimates the input LAS file by keeping every other point, and then colorizes the points using the provided raster image. The output is written as ASCII text.
{
  "pipeline":[
    {
      "filename":"1.2-with-color.las",
      "spatialreference":"EPSG:2993"
    },
    {
      "type":"decimation",
      "step":2,
      "offset":1
    },
    {
      "type":"colorization",
      "raster":"autzen.tif",
      "dimensions":"Red:1:1, Green:2:1, Blue:3:1"
    },
    {
      "filename":"junk.txt",
      "delimiter":",",
      "write_header":false
    }
  ]
}
Merge & Reproject¶
In our first example with multiple readers, this pipeline infers the reader types and assigns spatial reference information to each. Next, the filters.merge filter merges points from all previous readers, and the filters.reprojection filter reprojects the data to the specified output spatial reference system.
{
  "pipeline":[
    {
      "filename":"1.2-with-color.las",
      "spatialreference":"EPSG:2027"
    },
    {
      "filename":"1.2-with-color.las",
      "spatialreference":"EPSG:2027"
    },
    {
      "type":"merge"
    },
    {
      "type":"reprojection",
      "out_srs":"EPSG:2028"
    }
  ]
}
Globbed Inputs¶
Finally, another merge pipeline demonstrates the ability to glob multiple input LAS files from a given directory.
{
  "pipeline":[
    "/path/to/data/*.las",
    {
      "type":"merge"
    },
    "output.las"
  ]
}
See also
The PDAL source tree contains a number of example pipelines that are used for testing. You might find these inspiring. Go to https://github.com/PDAL/PDAL/tree/master/test/data/pipeline to find more.
API Considerations¶
A Pipeline is composed as an array of pdal::Stage, with the first stage at the beginning and the last at the end. There are two primary building blocks in PDAL, pdal::Stage and pdal::PointView. pdal::Reader, pdal::Writer, and pdal::Filter are all subclasses of pdal::Stage.

pdal::PointView is the substrate that flows between stages in a pipeline and transfers the actual data as it moves through the pipeline. A pdal::PointView contains a pdal::PointTablePtr, which itself contains a list of pdal::Dimension objects that define the actual channels that are stored in the pdal::PointView.

PDAL provides four types of stages – pdal::Reader, pdal::Writer, pdal::Filter, and pdal::MultiFilter – with the latter hardly used (only filters.merge) at this point. A Reader is a producer of data, a Writer is a consumer of data, and a Filter is an actor on data.
Note
As a C++ API consumer, you generally should not need to worry about the underlying storage of the PointView, but there might be times when you simply want “the data.” In those situations, you can use the pdal::PointView::getBytes() method to stream out the raw storage.
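To make these relationships concrete, here is a minimal sketch of assembling and executing a two-stage pipeline directly against the C++ API. The filenames and bounds are placeholders, and exact headers and signatures may vary between PDAL versions:

#include <pdal/Options.hpp>
#include <pdal/PointTable.hpp>
#include <pdal/PointView.hpp>
#include <pdal/Stage.hpp>
#include <pdal/StageFactory.hpp>

#include <iostream>

int main()
{
    pdal::StageFactory factory;

    // Create a reader stage and point it at a (placeholder) input file.
    pdal::Stage* reader = factory.createStage("readers.las");
    pdal::Options readerOpts;
    readerOpts.add("filename", "input.las");
    reader->setOptions(readerOpts);

    // Create a crop filter and wire the reader in as its input.
    pdal::Stage* crop = factory.createStage("filters.crop");
    pdal::Options cropOpts;
    cropOpts.add("bounds", "([0,100],[0,100])");
    crop->setOptions(cropOpts);
    crop->setInput(*reader);

    // Prepare and execute; PointViews carry the data out of the final stage.
    pdal::PointTable table;
    crop->prepare(table);
    pdal::PointViewSet views = crop->execute(table);
    for (const pdal::PointViewPtr& view : views)
        std::cout << view->size() << " points\n";
    return 0;
}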
Usage¶
While pipeline objects can be manipulated directly through C++ objects, the other, more convenient way is through a JSON syntax. The JSON syntax mirrors the arrangement of the Pipeline, with options and auxiliary metadata added on a per-stage basis.
We have two use cases specifically in mind:
- a command-line application that reads a JSON file, allowing a user to easily construct arbitrary writer pipelines instead of building custom applications for individual needs with arbitrary options, filters, etc.
- a user can provide JSON for a reader pipeline, construct it via a simple call to the PipelineManager API, and then use the pdal::Stage::read() function to perform the read before doing any processing of the points. This style of operation is well suited to using PDAL from within environments like Python, where the focus is on just getting the points rather than on complex pipeline construction.
{
  "pipeline":[
    "/path/to/my/file/input.las",
    "output.las"
  ]
}
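A minimal sketch of the second use case, reading and executing a pipeline file like the one above via the PipelineManager API (the filename is a placeholder, and exact signatures may differ across PDAL versions):

#include <pdal/PipelineManager.hpp>

#include <iostream>

int main()
{
    pdal::PipelineManager mgr;

    // Parse the JSON pipeline and run all of its stages.
    mgr.readPipeline("pipeline.json");
    mgr.execute();

    // Iterate the resulting PointViews to process the points.
    for (const pdal::PointViewPtr& view : mgr.views())
        std::cout << view->size() << " points\n";
    return 0;
}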
Note
https://github.com/PDAL/PDAL/blob/master/test/data/pipeline/ contains test suite pipeline files that provide an excellent example of the currently possible operations.
Stage Types¶
pdal::Reader, pdal::Writer, and pdal::Filter are the C++ classes that define the stage types in PDAL. Readers follow the pattern of readers.las or readers.oci, writers follow the pattern of writers.las or writers.oci, and filters follow the pattern of filters.reprojection or filters.crop.
Note
Readers, Writers, and Filters contains a full listing of possible stages and descriptions of their options.
Note
Issuing the command pdal info --options will list all available stages and their options. See info for more.
Options¶
Options are the mechanism that PDAL uses to inform pdal::Stage entities how to process data. The following example sorts the data using a Morton ordering via filters.mortonorder and writes out a LASzip file as the result. We use the compression option to tell the writers.las pdal::Stage to produce compressed output.
{
  "pipeline":[
    "uncompressed.las",
    {
      "type":"filters.mortonorder"
    },
    {
      "type":"writers.las",
      "filename":"compressed.laz",
      "compression":"true"
    }
  ]
}
}