Blog from June, 2016

We have just added support for the Velodyne Puck (VLP-16) to the velodyne-to-csv utility.

Simply run velodyne-to-csv --puck with all other options specified as usual.

For example, suppose your VLP-16 publishes its data over UDP on port 2368.

Then you could get the individual points, e.g. as:

> udp-client 2368 --timestamp | velodyne-to-csv --puck | head

...or view data as:

> udp-client 2368 --timestamp | velodyne-to-csv --puck --fields x,y,z,id,scan | view-points --fields x,y,z,id,block

with the output like:

Of course, ASCII CSV is too slow; as before, use binary data to keep up with the Velodyne stream in real time, e.g.:

> udp-client 2368 --timestamp | velodyne-to-csv --puck --fields x,y,z,id,scan --binary | view-points --fields x,y,z,id,block --binary 3d,2ui

In your bash scripts, do not declare local array variables inside a loop.

Try to run the following script and observe the time per iteration grow linearly:

num=${1:-1000}      # how many iterations between elapsed time reports
A=$(date +%s%N)     # timestamp in nanoseconds
function do_something()
{
    while true; do
        (( ++iteration ))
        sleep 0.001             # pretend to do some work
        local my_array=( 1 )    # create a local array
        # report elapsed time
        if (( ! ( iteration % num ) )); then
            B=$(date +%s%N)
            echo "$iteration $(( ( B - A ) / 1000000 ))"
            A=$(date +%s%N)
        fi
    done
}
do_something


The performance penalty comes from the line:

local my_array=( 1 )	# Array of one item

The line above creates a new local array on every single iteration, which slows bash down: the duration reported above (printed every 1000 iterations) grows linearly, so overall performance degrades quadratically with the number of iterations.
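The effect can be seen without any of the surrounding machinery. The following self-contained sketch times a loop that redeclares a local array on every iteration against one that declares it once and reuses it (the function names and iteration count are arbitrary; on recent bash versions the gap may be small):

```shell
#!/bin/bash
# a loop that declares a local array on every iteration...
function redeclare_every_iteration()
{
    local i
    for (( i = 0; i < 20000; i++ )); do local a=( 1 ); done
}
# ...against a loop that declares the array once and reuses it
function declare_once()
{
    local i
    local a
    for (( i = 0; i < 20000; i++ )); do a=( 1 ); done
}
t0=$(date +%s%N); redeclare_every_iteration; t1=$(date +%s%N)
t2=$(date +%s%N); declare_once;              t3=$(date +%s%N)
echo "redeclare every iteration: $(( ( t1 - t0 ) / 1000000 )) ms"
echo "declare once:              $(( ( t3 - t2 ) / 1000000 )) ms"
```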

For example, a common trap is to declare a local array to store the PIPESTATUS array:

while true; do
	funcA | funcB | funcC
	local status=( "${PIPESTATUS[@]}" )	# for checking all return codes in the above pipe
	# ... use the statuses ...
done

Corrected script:

declare -a status	# use 'local -a status' outside the loop if inside a function
while true; do
	funcA | funcB | funcC
	status=( "${PIPESTATUS[@]}" )	# for checking all return codes in the above pipe
	# ... use the statuses ...
done
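A complete, runnable version of the corrected pattern; true | false | true stands in for funcA | funcB | funcC, and the loop is bounded so that the sketch terminates:

```shell
#!/bin/bash
# declare the array once, outside the loop, then reuse it
declare -a status
for i in 1 2 3; do
    true | false | true               # stand-in for funcA | funcB | funcC
    status=( "${PIPESTATUS[@]}" )     # exit code of every stage of the pipe
    echo "iteration $i: exit codes: ${status[*]}"
done
```

Each iteration prints exit codes 0 1 0: true and the final true succeed, while false in the middle of the pipe returns 1.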

Exposing classes and functions defined in C++ libraries to Python is now possible in comma by creating C++/Python bindings with Boost.Python.


To illustrate this new capability, bindings for the C++ class format and its member function size() declared in csv/format.h have been defined in python/comma/cpp_bindings/csv.cpp:

// python/comma/cpp_bindings/csv.cpp
#include <boost/python.hpp>
#include <comma/csv/format.h>

BOOST_PYTHON_MODULE( csv )
{
    boost::python::class_< comma::csv::format >( "format", boost::python::init< const std::string& >() )
        .def( "size", &comma::csv::format::size );
    // add other csv bindings here
}

and added to cmake:

# fragment of python/comma/cpp_bindings/CMakeLists.txt

add_cpp_module( csv csv.cpp comma_csv )
# add other modules here

Build comma with BUILD_SHARED_LIBS=ON and BUILD_CPP_PYTHON_BINDINGS=ON, then open a python interpreter and enter the following commands:

>>> import comma.cpp_bindings.csv as csv
>>> f = csv.format('d,2ub,s[5]')
>>> f.size()

The function size() returns the binary size corresponding to the format string that was passed to the format object f on construction.
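As a quick sanity check of that size, assuming d is an 8-byte double, each ub a 1-byte unsigned byte, and s[5] a fixed 5-byte string (these widths are assumptions, not stated above):

```shell
# expected binary size of the format 'd,2ub,s[5]':
# one double (8 bytes) + two unsigned bytes (1 byte each) + a 5-byte string
echo $(( 8 + 2 * 1 + 5 ))   # prints 15
```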

Under the hood

The bindings are compiled into a shared library, which is then installed in comma/cpp_bindings along with the other Python modules. On Ubuntu, this is usually /usr/local/lib/python2.7/site-packages/comma/cpp_bindings/. Note that the name of the module passed to the BOOST_PYTHON_MODULE macro has to match the name of the shared library declared in the cmake file, e.g. csv in the above example.


Because the bindings are exposed as a shared library, one is limited to building comma with shared libraries, or to ensuring that all required static libraries have been compiled with -fPIC. Attempting to link against static libraries built without position-independent code may cause linking to fail, or may silently link against shared libraries instead.


view-points --pass-through

view-points has gained a new option --pass-through (or --pass for short) that allows it to become part of a processing pipeline.

The basic usage is:

$ cat data.csv | some-operation | view-points --pass | some-other-operation | view-points --pass > output.csv

or alternatively:

$ cat data.csv | some-operation | view-points "-;pass" "otherdata.csv" | some-other-operation | view-points "-;pass" > output.csv

When multiple data sources are viewed, only one can be given the pass option. pass also disables --output-camera-config and the ability to output the point under the mouse with a double right-click.

For a more complete example try:

$ cat cube.bin | view-points "-;binary=3d;pass" \
      | csv-eval --fields=x,y,z --binary=3d "a = abs(x) < 0.2" | view-points "-;fields=x,y,z,id;binary=4d;pass" \
      | points-to-voxels --fields x,y,z --binary=4d --resolution=0.2 | view-points "-;fields=,,,x,y,z;binary=3i,3d,ui;weight=5"

using the attached cube.bin input file.

You should see three concurrent windows, one for each stage of the processing pipeline.

A finite-state machine can be implemented in a few minutes on the command line or in a bash script using csv-join.

Assume we have the following state machine:


It has the following events:

  1. close
  2. open
  3. sensor closed
  4. sensor opened

and the following states:

  1. opened
  2. closing
  3. closed
  4. opening

The state transition table can be expressed in a csv file state-transition.csv, e.g.:

$ cat state-transition.csv
# event,state,next_state
open,closed,opening
sensor_opened,opening,opened
close,opened,closing
close,opening,closing
sensor_closed,closing,closed

With the state transition table, csv-join can read in events, output the next state and keep track of this new state. Here is an example usage (input is marked '<', output '>'):

$ csv-join --fields event "state-transition.csv;fields=event,state,next_state" --string --initial-state "closed"
< open
> open,open,closed,opening
< sensor_opened
> sensor_opened,sensor_opened,opening,opened
< close
> close,close,opened,closing
< sensor_closed
> sensor_closed,sensor_closed,closing,closed
< open
> open,open,closed,opening
< close
> close,close,opening,closing
< sensor_closed
> sensor_closed,sensor_closed,closing,closed

The input field and joining key in this case is a single field, event. As usual with csv-join, any number of fields can be used to represent an event. The following example has the event represented by two fields: operation and result.

csv-join --fields operation,result "state-transition.csv;fields=operation,result,state,next_state" --string --initial-state 1
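A transition table for this two-field case might look like the following (a hypothetical sketch: the operation and result values are illustrative only, and the numeric states are chosen simply to match --initial-state 1):

```
# operation,result,state,next_state
open,succeeded,1,2
open,failed,1,1
close,succeeded,2,1
close,failed,2,2
```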

csv-join expects the state transition table to contain unique matches only (as per csv-join --unique).

The finite-state machine is only activated when the file/stream fields contain both 'state' and 'next_state'.

Related articles

 Finite-state machine


This blog is mostly driven by the ACFR software team. We plan to post on the new features that we continuously roll out in comma, snark, and other ACFR open source repositories, and occasionally on more general software topics.