We have just added support for the Velodyne Puck (VLP-16) to the velodyne-to-csv utility.
Simply run velodyne-to-csv --puck with all other options specified as usual.
For example, suppose your VLP-16 publishes its data over UDP on port 2368.
Then you could get the individual points, e.g. as:
...or view data as:
with the output like:
Of course, ASCII CSV is too slow; as before, use binary data to keep up with the Velodyne stream in real time, e.g.:
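As a sketch only, the shape of such a pipeline might be as follows. Everything except --puck below is a hypothetical placeholder, not the actual interface; consult velodyne-to-csv --help for the real options:

```shell
# hypothetical placeholders in angle brackets; only --puck is real
velodyne-to-csv --puck <udp-options...>                    # ASCII CSV: human-readable, but slow
velodyne-to-csv --puck <udp-options...> <binary-options...> # binary: keeps up with the sensor
```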
In your bash scripts, do not declare local array variables inside a loop.
Run the following script and observe the time per iteration grow linearly:
The performance penalty is in the line:
The line above creates a new array on every single iteration, which slows bash scripts down: the duration reported above (output every 1000 iterations) grows linearly, causing quadratic performance degradation with respect to the number of iterations.
For example, a common trap is to declare a local array to store a copy of the PIPESTATUS array.
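A minimal self-contained sketch of the trap and the fix (variable names are illustrative; the slowdown shows up in affected bash versions, and timings are machine-dependent):

```shell
#!/bin/bash

# The trap: 'local -a' inside the loop body re-declares the array on
# every iteration, and each iteration gets slower than the last.
function slow_loop()
{
    local i
    local start=$SECONDS
    for (( i = 1; i <= 10000; i++ )); do
        true | true
        local -a status=( "${PIPESTATUS[@]}" ) # re-declared every iteration: the trap
        if (( i % 2500 == 0 )); then echo "slow: $i iterations, $(( SECONDS - start ))s elapsed"; fi
    done
}

# The fix: declare the local array once, outside the loop, and only
# assign to it inside.
function fast_loop()
{
    local i
    local -a status # declared once
    local start=$SECONDS
    for (( i = 1; i <= 10000; i++ )); do
        true | true
        status=( "${PIPESTATUS[@]}" ) # plain assignment: no re-declaration
        if (( i % 2500 == 0 )); then echo "fast: $i iterations, $(( SECONDS - start ))s elapsed"; fi
    done
}

slow_loop
fast_loop
```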
Exposing classes and functions defined in C++ libraries to Python is now possible in comma by creating C++/Python bindings with Boost.Python.
To illustrate this new capability, bindings for the C++ class format and its member function size(), declared in csv/format.h, have been defined and added to cmake:
Build comma with BUILD_CPP_PYTHON_BINDINGS=ON, then open a python interpreter and enter the following commands:
size() outputs the binary size corresponding to the format string that was passed to the format object f on construction.
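For instance, the interpreter session might look roughly like this. The format string "t,3d" below is an assumed example based on comma's format-string conventions, not the original post's input:

```python
# assumes comma was built with BUILD_CPP_PYTHON_BINDINGS=ON and installed
from comma.cpp_bindings import csv

f = csv.format( "t,3d" )  # a timestamp followed by three doubles
print( f.size() )         # binary size in bytes of one record in this format
```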
Under the hood
The bindings are placed inside a shared library and saved in the file csv.so, which is then installed in comma/cpp_bindings along with the other Python modules. On Ubuntu, it will usually be /usr/local/lib/python2.7/site-packages/comma/cpp_bindings/csv.so. Note that the name of the module passed to the BOOST_PYTHON_MODULE macro has to match the name of the shared library declared in the cmake file, e.g. csv in the above example.
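For illustration, the module definition might look roughly like this. This is a sketch under the assumption that comma::csv::format is constructed from a format string and that size() takes no arguments; the actual comma source may differ:

```cpp
// illustrative sketch, not the actual comma source
#include <boost/python.hpp>
#include <comma/csv/format.h>

// the module name passed to BOOST_PYTHON_MODULE must match the
// shared library name, csv.so, as noted above
BOOST_PYTHON_MODULE( csv )
{
    boost::python::class_< comma::csv::format >( "format", boost::python::init< const std::string& >() )
        .def( "size", &comma::csv::format::size );
}
```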
The bindings are exposed as a shared library, and hence one is limited to building comma with shared libraries, or to ensuring that all required static libraries have been compiled with -fPIC. Attempting to link against static libraries built without position-independent code may cause linking to fail or to fall back to shared libraries.
view-points has gained a new option --pass-through (or --pass for short) that allows it to become part of a processing pipeline.
The basic usage is:
When multiple data sources are viewed, only one can be given the pass option. --pass also disables --output-camera-config and the ability to output the point under the mouse with a double right-click.
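The shape of such a pipeline might look like this. The source string syntax and the middle stage below are illustrative placeholders; only --pass-through/--pass itself is from the release:

```shell
# hypothetical: view the raw points, transform them, and view the result,
# all in one pipeline; --pass forwards stdin to stdout unmodified
cat points.bin \
    | view-points "-;binary=3d;fields=x,y,z" --pass \
    | <some-transformation> \
    | view-points "-;binary=3d;fields=x,y,z"
```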
For a more complete example try:
using the attached cube.bin input file.
You should see three concurrent windows like this:
showing three stages of the processing pipeline.
A finite-state machine can be implemented in a few minutes on the command line or in a bash script using csv-join.
Assume we have the following state machine:
It has the following events and states:
- sensor closed
- sensor opened
The state transition table can be expressed in a csv file state-transition.csv:
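For example, with the sensor open/close events above, state-transition.csv might contain the fields event,state,next_state, one transition per line. The exact field names and values here are an assumption for illustration, not the original file:

```
open,closed,opened
close,opened,closed
open,opened,opened
close,closed,closed
```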
With the state transition table, csv-join can read in events, output the next state, and keep track of that new state. Here is an example usage (input is marked '<', output '>'):
The input field and joining key in this case is a single field, event. As usual with csv-join, any number of fields can be used to represent an event. The following example has the event represented by two fields: operation and result.
csv-join expects the state transition table to contain unique matches only (as per csv-join --unique).
The finite-state machine is only activated when the file/stream fields contain both 'state' and 'next_state'.
This blog is mostly driven by the ACFR software team. We plan to post on the new features that we continuously roll out in comma, snark, and other ACFR open source repositories (https://github.com/acfr), and occasionally on more general software topics.