csv-split can also stream the data associated with multiple ids to specific sockets, named pipes, or files. See
csv-split --help for more details about the semantics.
If you need to, you can mix publishing to TCP sockets, local sockets, files, and pipes.
csv-split works as before (splitting data into files) if no streams are specified. Furthermore, if streams are assigned to some ids and none to
"..." , then the data relating to the remaining ids is discarded.
cv-calc roi and
draw operations can now read the shapes from csv files. To support this, the usual csv options have also been added to the shape attributes.
If the shapes have (reverse) index fields, cv-calc will draw or apply all the shapes of a block on a single image; otherwise it takes only one shape per image.
If the shapes file has a field t (timestamp), then the shapes of a block are drawn only on the image with the corresponding timestamp; otherwise each block of shapes is drawn on the next available image (see cv-calc --help for details).
To try the following examples, download this image.
name-value-apply is a thin wrapper around the
name-value-convert --take-last functionality. It takes multiple configuration files on the command line and keeps only the value of the last occurrence of each name across all the files.
This allows you to combine a set of configuration files consistently; for example, you may have a default configuration file for your device, then a file with some customised settings, and so on.
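The last-occurrence-wins logic is easy to picture in Python; a minimal sketch, assuming simple name=value lines rather than the path-value formats name-value-convert actually understands:

```python
def take_last(*configs):
    """Combine configs given as name=value text; for each name, the value
    of its last occurrence across all inputs wins."""
    result = {}
    for config in configs:
        for line in config.splitlines():
            if "=" in line:
                name, _, value = line.partition("=")
                result[name.strip()] = value.strip()
    return result

defaults = "device/rate=10\ndevice/port=5000"
customised = "device/rate=20"
merged = take_last(defaults, customised)  # the customised rate overrides the default
```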
The math-array utility in snark is a trivial wrapper around a range of numpy array operations. The main purpose of math-array is to make it easy to run array operations on streams of data compatible with the csv-style utilities in comma and snark.
math-array does not attempt to substitute numpy functionality: if you need something customised, just write your own python code as usual.
Currently, it exposes three operations:
- (relatively) arbitrary numpy array operation
See math-array --help for more details.
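The kind of per-record processing that math-array automates can be sketched in Python with numpy (illustrative only; stream_op is a made-up helper, and the real utility also handles binary csv streams):

```python
import numpy as np

def stream_op(lines, shape, op):
    """For each csv record, view its fields as an array of the given
    shape, apply a numpy operation, and yield the flattened result."""
    for line in lines:
        a = np.array([float(v) for v in line.split(",")]).reshape(shape)
        yield op(a).ravel().tolist()

# e.g. transpose each 2x2 array in the stream
out = list(stream_op(["1,2,3,4"], (2, 2), np.transpose))
```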
Among other things, csv-paste can number the lines of its output. Now, the numbering parameters can be individualised if there are several instances of line-number among the command line parameters. Examples:
Like other comma utilities, csv-paste can operate on ascii or binary data. See csv-paste --help for more configuration possibilities.
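To illustrate what individualised numbering means, here is a sketch in Python (paste_line_numbers is a made-up name; the actual option spelling for begin and step is in csv-paste --help). Each line-number instance keeps its own counter with its own begin and step:

```python
from itertools import count

def paste_line_numbers(lines, counters):
    """Prepend several independent line-number columns to each record;
    each counter has its own (begin, step)."""
    iters = [count(begin, step) for begin, step in counters]
    return [",".join([str(next(i)) for i in iters] + [line]) for line in lines]

# first counter starts at 0 with step 1, second at 10 with step 5
numbered = paste_line_numbers(["a", "b"], [(0, 1), (10, 5)])
```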
csv-thin thins down high-bandwidth data at a given rate.
A new option,
--period, allows you to specify the period of the output, regardless of the rate of the input data (assuming the input is at least as fast as the desired output rate).
Using csv-paste as a high-rate input source, you can try it with:
By default, it uses wall-clock time for clocking the data. Alternatively, and usefully with pre-captured data, you can use a time field in the data:
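The thinning logic with a time field can be sketched in Python (illustrative only; the time field here is assumed to be a leading timestamp in seconds):

```python
def thin_by_period(records, period):
    """Output a record only if at least 'period' seconds of data time
    have elapsed since the last output record."""
    last = None
    for record in records:
        t = float(record.split(",")[0])
        if last is None or t - last >= period:
            last = t
            yield record

kept = list(thin_by_period(["0.0,a", "0.4,b", "1.0,c", "1.2,d", "2.1,e"], 1.0))
```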
Multiple rectangular regions can be specified in the
roi operation of
cv-calc (just like in the
draw operation), so that:
- everything outside these regions in the input images is set to zero, or
- these regions are cropped out of the input images into separate images (the region arguments prefixed to the input will be removed).
All images in the input stream must have the same number of regions. Any region with zero width or height (e.g. 0,0,0,0) will be ignored and, if needed, can be used as padding so that all images have the same number of regions.
If all the bounding boxes of an image have zero area, then the whole image will be set to zero.
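The masking behaviour can be sketched with numpy (a conceptual illustration only; rectangles here are assumed to be given as min/max corners):

```python
import numpy as np

def mask_outside_regions(image, regions):
    """Zero out everything outside the given rectangles; rectangles with
    zero width or height are ignored."""
    mask = np.zeros_like(image)
    for min_x, min_y, max_x, max_y in regions:
        if max_x > min_x and max_y > min_y:
            mask[min_y:max_y, min_x:max_x] = 1
    return image * mask

image = np.ones((4, 4), dtype=int)
# one real region plus a zero-size padding region, which is ignored
out = mask_outside_regions(image, [(0, 0, 2, 2), (0, 0, 0, 0)])
```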
To try the following examples, download this image.
If you are putting together a training dataset for classification or object detection, you may need to create a uniformly distributed random selection of image crops from your image data.
The following pipeline helps you do it. It picks random images, cuts 4 random patches of size 300x200 out of each of them, and saves them as png files in the current directory.
(Note: the index parameter in file=png,,index is required, because otherwise the patches cut out of the same image would all have the same filename.)
See cv-calc --help for more configuration options.
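The core of such a pipeline is picking crop positions uniformly at random; a sketch in Python (the function is made up, just to show the corner arithmetic):

```python
import random

def random_crop_corners(width, height, crop_w, crop_h, n, rng):
    """Pick n uniformly distributed top-left corners such that a
    crop_w x crop_h patch fits entirely inside a width x height image."""
    return [(rng.randrange(width - crop_w + 1), rng.randrange(height - crop_h + 1))
            for _ in range(n)]

# 4 patches of 300x200 from a 640x480 image, with a fixed seed
corners = random_crop_corners(640, 480, 300, 200, 4, random.Random(0))
```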
To recap: io-cat is a utility extending cat functionality to merging live streams. On files, io-cat semantics is the same as that of cat, but it can merge streams, too, e.g. merge three streams:
It supports a couple of simple merge policies: first come first served by default, or round robin; e.g. try:
io-cat can now also wait for publishing servers to start, using the io-cat --connect-attempts option, e.g.:
See io-cat --help for more configuration options.
Last but not least: broadly, the right approach to persistent clients would be to use a publish/subscribe middleware of your choice. ZeroMQ is a light-weight option (and comma zero-cat supports a core subset of it). However, if you just want to quickly cobble together simple merging of multiple streams, potentially from heterogeneous sources, io-cat is there for you.
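For intuition, round robin takes one record from each live stream in strict rotation, whereas first come first served simply forwards records in arrival order; a sketch over in-memory lists:

```python
def round_robin(streams):
    """Merge streams one record at a time in strict rotation, dropping a
    stream once it is exhausted."""
    iterators = [iter(s) for s in streams]
    merged = []
    while iterators:
        for it in iterators[:]:
            try:
                merged.append(next(it))
            except StopIteration:
                iterators.remove(it)
    return merged

merged = round_robin([["a1", "a2"], ["b1"]])
```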
The control-speed utility sets the speed of each waypoint of a path based on its position on the curve.
The turn operation calculates the angle at each waypoint with respect to its adjacent waypoints and assigns the speed according to a given maximum lateral acceleration. With the appropriate option,
control-speed can implement a spot turn by outputting, for each sharp turn in the trajectory, an extra waypoint with relative heading and no speed.
The control-speed decelerate operation moderates sudden decreases of speed in the trajectory by applying a given deceleration.
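The relationship the turn operation exploits is a = v^2 / r: on a circle of radius r, the speed that keeps lateral acceleration within a_max is v = sqrt(a_max * r). A sketch, with the radius taken from the circle through a waypoint and its two neighbours (the function name is made up; the real utility works on csv streams):

```python
import math

def speed_at_waypoint(p0, p1, p2, max_lateral_acceleration):
    """Speed at p1 such that the lateral acceleration on the circle
    through p0, p1, p2 does not exceed the maximum: v = sqrt(a_max * r)."""
    a, b, c = math.dist(p0, p1), math.dist(p1, p2), math.dist(p0, p2)
    # twice the (signed) triangle area, via the cross product
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    if abs(cross) < 1e-12:
        return float("inf")  # collinear waypoints: no lateral acceleration
    radius = a * b * c / (2 * abs(cross))  # circumradius of the triangle
    return math.sqrt(max_lateral_acceleration * radius)

# the circle through these waypoints has radius 1, so with a_max = 1, v = 1
v = speed_at_waypoint((0.0, 0.0), (1.0, 1.0), (2.0, 0.0), 1.0)
```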
If you need to quickly deploy a bunch of services for line-based or fixed-width data over TCP, local sockets, ZeroMQ, etc., you now can use io-topics, a utility in comma. You can deploy services that run continuously, or start them only if there is at least one client (e.g. if they are too resource-greedy).
It is certainly not a replacement for a proper middleware like ROS, or simply for systemd, but the advantages of io-topics are its light weight, its ad-hoc nature, and its ability to run a mix of transport protocols.
Try the following toy example of io-topics publish:
You can also create, on the fly if you want, a light-weight subscriber, as in the example below. Run publishing as in the example above and then run io-topics cat:
If you would like to suspend your log playback, e.g. for demo purposes when visualising a point cloud stream or any other kind of csv data, or while browsing your data, you now can use csv-play --interactive or csv-play -i, pressing <whitespace> to pause and resume. Try running the example below:
Press left or down arrow keys to output one record at a time. (Keys for outputting one block at a time: todo.)
Sometimes, one may need to repeat the same record, just as the Linux yes utility does. The problem with yes is that you cannot tell it to repeat at a given time interval.
Now csv-repeat --ignore-eof can do it for you, which is useful, for example, if you need to quickly fudge a sort of heartbeat stream, a simulated data stream, or alike:
Binary mode is supported as usual.
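The heartbeat behaviour boils down to re-emitting the last record on a timer; a tiny self-contained sketch (not csv-repeat itself):

```python
import time

def repeat_with_period(record, period, count, out):
    """Emit the record every 'period' seconds, 'count' times, stamping
    each output with a monotonic timestamp."""
    for _ in range(count):
        out.append((time.monotonic(), record))
        time.sleep(period)

out = []
repeat_with_period("heartbeat", 0.01, 3, out)
```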
The points-calc nearest-(min/max) and
percentile operations search within a given radius around each input point, which can take a lot of time for large amounts of input data.
One way to speed things up is, instead of finding the nearest min to each point within a given radius, to find the minimum across the 27 voxels in the neighbourhood of the voxel containing the point; that computed value is then assigned to each point in that voxel.
This optimization is used when
points-calc nearest-(min/max) or
points-calc percentile is given
--fast command line argument. For example:
On a large point cloud, like that of rose street (), the optimized operations were found to be 20 times faster for extrema and more than 100 times faster for percentiles.
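The voxel trick is easy to sketch in Python for the nearest-min case (a conceptual illustration, with a plain dictionary instead of the real voxel grid):

```python
def fast_nearest_min(points, resolution):
    """Approximate nearest-min: take the minimum scalar over the 27 voxels
    around each point's voxel and assign it to every point in that voxel."""
    voxel_min = {}
    for x, y, z, scalar in points:
        key = (int(x // resolution), int(y // resolution), int(z // resolution))
        voxel_min[key] = min(scalar, voxel_min.get(key, scalar))
    def neighbourhood_min(i, j, k):
        values = [voxel_min.get((i + di, j + dj, k + dk))
                  for di in (-1, 0, 1) for dj in (-1, 0, 1) for dk in (-1, 0, 1)]
        return min(v for v in values if v is not None)
    return [(x, y, z, neighbourhood_min(int(x // resolution), int(y // resolution), int(z // resolution)))
            for x, y, z, scalar in points]

out = fast_nearest_min([(0.1, 0.1, 0.1, 5.0), (0.9, 0.2, 0.1, 2.0), (3.5, 0.1, 0.1, 7.0)], 1.0)
```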
Assume you would like to quickly find additive changes in a scene. For example, you have a static point cloud of an empty car park and would like to extract the parked cars from a stream of lidar data. If the extraction does not have to be perfect, a quick way of doing it would be to use points-join --not-matching. A simple example:
The described car park scenario would look like:
The crude part is, of course, choosing the --radius value: it should be such that the spheres of the given radius around the subtrahend point cloud overlap sufficiently to capture all the points belonging to it. But then the points that are closer than the radius to the subtrahend point cloud will be filtered out, too; e.g. in the car park example above, the wheels of the cars will be chopped off at 10cm above the ground. To avoid this problem, you could, for example, erode the subtrahend point cloud by the radius.
The described approach may be crude, but it is quick and suitable for many practical purposes.
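The essence of the subtraction can be sketched in Python with a brute-force search (the real points-join uses spatial indexing and streams; not_matching here is a made-up name):

```python
import math

def not_matching(points, subtrahend, radius):
    """Keep only the points with no subtrahend point within the radius."""
    return [p for p in points
            if all(math.dist(p, q) > radius for q in subtrahend)]

background = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # e.g. the empty car park
scene = [(0.05, 0.0, 0.0), (5.0, 5.0, 0.0)]      # live scan: one old point, one new
changes = not_matching(scene, background, 0.1)
```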
Of course, for more sophisticated change detection in point clouds, which is more accurate and takes into account viewpoints, occlusions, additions and deletions of objects in the scene, etc., you could use points-detect-change.