
The control-speed utility sets the speed of each waypoint in the path based on its position on the curve.

The turn operation calculates the angle at each waypoint with respect to its adjacent waypoints and assigns the speed according to a given maximum lateral acceleration. By passing --stop-on-sharp-turn or --pivot, control-speed can implement a spot turn by outputting, for each sharp turn in the trajectory, an extra waypoint with relative heading and zero speed.

$ ( echo '0.0,0.0'; echo '0.3,0.3'; echo '0.6,0.6'; echo '0.6,0.9'; echo '0.6,1.2'; echo '0.9,1.2'; echo '1.2,1.2'; echo '1.5,0.9'; echo '1.8,0.6' ) > trajectory.csv

# moderate speed
$ control-speed turn --max-acceleration=0.5 --approach-speed=0.2 --fields=x,y --speed=1 < trajectory.csv > speed-turn.csv

# stop on sharp turns
$ control-speed turn --max-acceleration=0.5 --approach-speed=0.2 --fields=x,y --speed=1 --pivot < trajectory.csv > speed-pivot.csv

# visualise with trajectory as blue and speed as z axis in yellow
$ view-points "trajectory.csv;fields=x,y;shape=lines;title=trajectory" <( echo 0,0,begin )";fields=x,y,label;weight=8;color=red;title=origin" "speed-pivot.csv;fields=x,y,z;shape=lines;color=yellow;title=turn"

 

The control-speed decelerate operation moderates sudden decreases of speed in the trajectory, limiting them to a given deceleration.

$ control-speed decelerate --fields=x,y,speed --deceleration=0.5 < speed-pivot.csv > speed-decelerate.csv

# visualise with speed as z-axis and orange color as the decelerated speed
$ view-points "trajectory.csv;fields=x,y;shape=lines;title=trajectory" <( echo 0,0,begin )";fields=x,y,label;weight=8;color=red;title=origin" \
    "speed-pivot.csv;fields=x,y,z;shape=lines;color=yellow;title=turn" "speed-decelerate.csv;fields=x,y,z;shape=lines;color=orange;title=decelerate"

If you need to quickly deploy a bunch of services for line-based or fixed-width data over TCP, local sockets, ZeroMQ, etc, you can now use io-topics, a utility in comma. You can deploy services that run continuously, or start them only when at least one client is connected (e.g. if they are too resource-greedy).

It is perhaps not a replacement for proper middleware like ROS, or simply systemd, but the advantages of io-topics are its light weight, its ad-hoc nature, and its ability to run a mix of transport protocols.

Try the following toy example of io-topics publish:

> # run publisher with topics a and b, with b on demand
> io-topics publish --config <( echo "a/command=csv-paste line-number"; echo "a/port=8888"; echo "b/command=csv-paste line-number"; echo "b/port=9999"; echo "b/on_demand=1" )
io-topics: publish: will run 'comma_execute_and_wait --group' with commands:
io-topics: publish:    io-publish tcp:8888   -- csv-paste line-number
io-topics: publish:    io-publish tcp:9999  --on-demand -- csv-paste line-number
    
> # in a different shell, observe that topic a keeps running even if no-one is listening,
> # whereas topic b runs only if at least one client is connected:
> socat tcp:localhost:8888 - | head -n5 # will output something like the following, since the service keeps running even if no clients are connected:
16648534
16648535
16648536
16648537
16648538
        
> socat tcp:localhost:9999 - | head -n5 # whenever the first client connects, will start from 0, since it runs only if at least one client is connected
0
1
2
3
4

You can also create a lightweight subscriber, on the fly if you want, as in the example below. Run publishing as in the example above and then run io-topics cat:

> io-topics cat --config <( echo "a/command=head -n5 > a.csv"; echo "a/address=tcp:localhost:8888"; echo "b/command=head -n5 > b.csv"; echo "b/address=tcp:localhost:9999" )
io-topics: cat: will run 'comma_execute_and_wait --group' with commands:
io-topics: cat:     bash -c io-cat tcp:localhost:8888   | head -n5 > a.csv
io-topics: cat:     bash -c io-cat tcp:localhost:9999   | head -n5 > b.csv
> # check output            
> cat a.csv 
203740462
203740463
203740464
203740465
203740466
> cat b.csv 
0
1
2
3
4

If you would like to suspend your log playback (e.g. for demo purposes, when visualising a point cloud stream or any other kind of CSV data, or while browsing your data), you can now use csv-play --interactive (or csv-play -i), pressing <whitespace> to pause and resume. Try running the example below:

> echo 0 | csv-repeat --period 0.1 --yes | csv-time-stamp | csv-play --interactive
csv-play: running in interactive mode; press <whitespace> to pause or resume
20180503T032156.234658,0
20180503T032156.334336,0
20180503T032156.434497,0
20180503T032156.534721,0
20180503T032156.635077,0
20180503T032156.735428,0
20180503T032156.835511,0
20180503T032156.935653,0
20180503T032157.035926,0
csv-play: paused
csv-play: resumed
20180503T032157.136239,0
20180503T032157.236530,0

Press the left or down arrow keys to output one record at a time. (Keys for outputting one block at a time: todo.)

Sometimes, one may need to repeat the same record, just as the Linux yes command does. The problem with yes is that you cannot tell it to repeat at a given time interval.

Now, csv-repeat --ignore-eof can do it for you, which is useful, for example, if you need to quickly fudge a heartbeat stream, a simulated data stream, or the like:

> echo hello | csv-repeat --period 0.1 --ignore-eof | head -n5
hello
hello
hello
hello
hello
> echo hello | csv-repeat --period 0.1 --ignore-eof | csv-time-stamp | head -n5
20180420T034741.498771,hello
20180420T034741.600020,hello
20180420T034741.700202,hello
20180420T034741.800367,hello
20180420T034741.900539,hello

Binary mode is supported as usual.
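
For instance, a minimal sketch of a binary heartbeat stream (assuming csv-repeat takes the standard --binary=<format> option, like other comma csv utilities):

> echo 0,1 | csv-to-bin 2d | csv-repeat --period 0.1 --ignore-eof --binary 2d | csv-from-bin 2d | head -n3
0,1
0,1
0,1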

The points-calc nearest-(min/max) and percentile operations search within a given radius around each input point. This can take a lot of time for large amounts of input data.

One way to speed things up is, instead of finding the nearest min for each point within a given radius, to find the minimum across the 27 voxels in the neighbourhood of the voxel containing the point; that computed value is then assigned to every point in that voxel.

This optimisation is used when points-calc nearest-(min/max) or points-calc percentile is given the --fast command line option. For example:

> points-calc nearest-min --full --fast --fields x,y,scalar --radius 1
> points-calc percentile --percentile=0.03 --fast --fields x,y,scalar --radius 1

On a large point cloud, like that of Rose Street (http://perception.acfr.usyd.edu.au/data/samples/riegl/rose.st/rose.st.*.csv.gz), the optimised operations were found to be 20 times faster for extremums and more than 100 times faster for percentiles.

Assume you would like to quickly find additive changes in a scene. For example, you have a static point cloud of an empty car park and would like to extract the parked cars from a stream of lidar data. If the extraction does not have to be perfect, a quick way of doing it would be to use points-join --not-matching. A simple example:

> # make sample point clouds
> for i in {20..30}; do for j in {0..50}; do for k in {0..50}; do echo $i,$j,$k; done; done; done > minuend.csv
> for i in {0..50}; do for j in {20..30}; do for k in {20..30}; do echo $i,$j,$k; done; done; done > subtrahend.csv
> cat minuend.csv | points-join subtrahend.csv --radius 0.51 --not-matching | view-points "minuend.csv;colour=red;hide" "subtrahend.csv;colour=yellow;hide" "-;colour=white;title=difference"

The described car park scenario would look like:

> cat carpark-with-cars.csv | points-join --fields x,y,z "empty-carpark.csv;fields=x,y,z" --radius 0.1 --not-matching > cars-only.csv

The crude part is, of course, in choosing the --radius value: it should be such that the spheres of the given radius around the subtrahend point cloud overlap sufficiently to capture all the points belonging to it. But then the points that are closer than the radius to the subtrahend point cloud will be filtered out, too; e.g. in the car park example above, the wheels of the cars will be chopped off at 10cm above the ground. To avoid this problem, you could, for example, erode the subtrahend point cloud by the radius.

The described approach may be crude, but it is quick and suitable for many practical purposes.

Of course, for more sophisticated change detection in point clouds, which is more accurate and takes into account view points, occlusions, additions and deletions of objects in the scene, etc, you could use points-detect-change.

Assume that you happen to know the coordinates of your sensor in some Cartesian coordinate system and want to derive the coordinates of your robot centre. From the robot configuration you know the offset of the sensor from the robot centre, but not the other way around. The solution:

get inverse offset
# assume this is the offset of the sensor from the robot centre
offset="1.3,-0.6,0.3,-0.5,0.6,-0.7"

inversed_offset=$( echo "0,0,0,0,0,0" | points-frame --to="$offset" --fields="x,y,z,roll,pitch,yaw" --precision=16 )

Now inversed_offset is the pose (position and orientation) of the robot centre in the coordinate system associated with the sensor.
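
A quick sanity check (a sketch, relying on the --to/--from semantics used above): transforming inversed_offset back through the original offset should return the all-zero pose, up to numerical precision:

check inverse offset
echo "$inversed_offset" | points-frame --from="$offset" --fields="x,y,z,roll,pitch,yaw"
# expect approximately 0,0,0,0,0,0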

Step by step demo:

  • Start with some coordinates (navigation data in the world frame; the specific coordinate system does not matter). A sample data file is attached to this page:

    get nav
    cat nav.bin | csv-from-bin t,6d | head -n 2
    20101209T032242.279632,6248546.875440197,332966.0202201727,-37.76910082437098,-0.02045280858874321,-0.01195328310132027,2.113722085952759
    20101209T032242.280425,6248546.875484069,332966.020129519,-37.76909669768065,-0.02046919427812099,-0.0119620431214571,2.113716840744019

    The example uses binary, but this is up to you. The nav.bin file contains the trajectory of the robot centre (GPS unit) in the world frame.

  • Get the coordinates of the sensor in the world frame:

    get sensor trajectory
    cat nav.bin | csv-paste "-;binary=t,6d" "value=$offset;binary=6d" \
        | points-frame --from --fields=",frame,x,y,z,roll,pitch,yaw" --binary="t,6d,6d" \
        | csv-shuffle --fields="t,,,,,,,,,,,,,x,y,z,roll,pitch,yaw" --binary="t,6d,6d,6d" --output-fields="t,x,y,z,roll,pitch,yaw" > sensor.bin

    Now sensor.bin is the trajectory of the sensor in the world frame. We want to get the trajectory of the robot centre from these data.

  • Just do it:

    get centre coordinates back
    cat sensor.bin | csv-paste "-;binary=t,6d" "value=$inversed_offset;binary=6d" \
        | points-frame --from --fields=",frame,x,y,z,roll,pitch,yaw" --binary="t,6d,6d" \
        | csv-shuffle --fields="t,,,,,,,,,,,,,x,y,z,roll,pitch,yaw" --binary="t,6d,6d,6d" --output-fields="t,x,y,z,roll,pitch,yaw" > restored.bin

    Note the use of inversed_offset.

  • Verify by comparing to the original nav.bin:

    verify
    cat nav.bin \
        | csv-paste "-;binary=t,6d" "restored.bin;binary=t,6d" \
        | csv-eval --full-xpath --binary="t,6d,t,6d" --fields="f/t,f/x,f/y,f/z,f/roll,f/pitch,f/yaw,s/t,s/x,s/y,s/z,s/roll,s/pitch,s/yaw" \
            "dx = abs(f_x - s_x); dy = abs(f_y - s_y); dz = abs(f_z - s_z); droll = abs(f_roll - s_roll); dpitch = abs(f_pitch - s_pitch); dyaw = abs(f_yaw - s_yaw);" \
        | csv-shuffle --fields=",,,,,,,,,,,,,,dx,dy,dz,droll,dpitch,dyaw" --binary="t,6d,t,6d,6d" --output-fields="dx,dy,dz,droll,dpitch,dyaw" \
        | csv-calc --fields="dx,dy,dz,droll,dpitch,dyaw" --binary="6d" mean \
        | csv-from-bin 6d

The output is on the order of \( 10^{-16} \). The precision is defined by the accuracy of the inversed_offset calculation above: if the --precision=16 option were not given, the comparison would hold only up to \( 10^{-12} \) or so.

RabbitMQ

Introduction

RabbitMQ is an open-source message broker (https://www.rabbitmq.com/).

It implements AMQP 0-9-1 (https://www.rabbitmq.com/tutorials/amqp-concepts.html).

Programming tutorials: https://www.rabbitmq.com/getstarted.html

Installation

For Ubuntu:

 

sudo apt-get install rabbitmq-server
# check service is installed and running
service rabbitmq-server status
# for python clients
sudo pip install pika

 

(for other platforms, see the installation instructions on the website)

rabbit-cat

rabbit-cat is a lightweight RabbitMQ client in Python, available in comma.

See rabbit-cat -h for examples.

example 1

For the receiver, run:

 

rabbit-cat listen localhost --queue="queue1"

 

For the sender, run in a separate terminal:

 

echo "hello world!" | rabbit-cat send localhost --queue="queue1" --routing-key="queue1"

 

 

Suppose the GPS unit on a vehicle is offset from the vehicle's geometrical centre; then you most likely need to convert the GPS trajectory, given as 6DOF points (x,y,z,roll,pitch,yaw), to the trajectory of the vehicle centre.

Another (almost identical) use case: you have got a trajectory from visual SLAM relative to a lidar and now want to convert it into the vehicle centre trajectory.

Now, it can be done as follows:

> gps_unit_offset=1,2,3,0.1,0.2,0.3
> cat gps_unit_trajectory.csv | points-frame --position $gps_unit_offset --fields frame

In the past, in such a situation, one would need to jump through hoops with points-frame as follows:

> cat gps_unit_trajectory.csv | csv-paste - value=$gps_unit_offset | points-frame --fields frame,position | csv-shuffle --fields ,,,,,,,,,,,,x,y,z,roll,pitch,yaw --output-fields x,y,z,roll,pitch,yaw

When joining two point clouds, if you would like to output a few nearest points, you can now use points-join with the --size option:

> # single nearest point (same as before):
> echo 0,0,0 | points-join <( echo 0,0,1; echo 0,0,2; echo 0,0,3 ) --radius 5
0,0,0,0,0,1
> # up to a given number of nearest points:
> echo 0,0,0 | points-join <( echo 0,0,1; echo 0,0,2; echo 0,0,3 ) --radius 5 --size 2
0,0,0,0,0,1
0,0,0,0,0,2

Suppose you have two point clouds, cloud 1 and cloud 2, and for each point P from cloud 1 you would like to get all the points from cloud 2 that are not farther than a given radius from P.

Then, you could use points-join --all:

> cat cloud-1.csv | points-join cloud-2.csv --radius 1.5 --all

Now, you can also specify a variable radius for points in cloud 1 (e.g. your radius may vary depending on your point cloud density or structure, as happened in our use case).

Then you could run:

> cat cloud-1.csv | points-join --fields x,y,z,radius cloud-2.csv --radius 1.5 --all

(Note that, as a limitation, the point-specific radius should not exceed the --radius value.)
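
A toy example (hypothetical data; the fourth input field is the per-point radius, so only the joined point at distance 1 should match, while the one at distance 2 is rejected):

> echo 0,0,0,1.2 | points-join --fields x,y,z,radius <( echo 0,0,1; echo 0,0,2 ) --radius 1.5 --all
0,0,0,1.2,0,0,1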

Sometimes, when you run some slow processing of a point cloud and output the result into a file, you may want to monitor the progress. The following trick may help:

> some_slow_processing_script > points.csv &
> ( i=0; while true; do cat points.csv | csv-paste value=$i -; (( ++i )); sleep 30; done ) | view-points --fields block,,,,x,y,z

Suppose you need to go through a dataset to pick images for your classification training data.

cv-cat view can be used for basic browsing, selecting images, and assigning a numeric label to them:

cat images.bin | cv-cat "view=0,,png"

The command above will show the image and wait for a key press:

Press whitespace to save the file as <timestamp>.png, e.g. 20170101T123456.222222.png

Press numerical keys 0-9: save the file as <timestamp>.<num>.png, e.g. if you press 5: 20170101T123456.222222.5.png

Press <Esc> to exit.

Press any other key to show the next frame without saving.

 

The view parameters have the following meaning:

The first parameter is the wait, in milliseconds, for a key press; 0 means wait indefinitely.

The second parameter is the window title (irrelevant for labelling).

The third parameter is the image extension, e.g. png, jpg, etc; the default is ppm.
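
For example, a sketch combining all three parameters (wait up to 500ms for a key press, title the window "labelling", save images as jpg):

cat images.bin | cv-cat "view=500,labelling,jpg;null"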

 

A few new features have been added to the cv-cat accumulate filter.

Before, it accumulated images as a sliding window of a given size. Now, you can also ask for a fixed layout of the accumulated image. It may sound confusing, but try running the following commands (press any key to move to the next image):

> # make sense of the input
> ( yes 255 | csv-to-bin ub ) | cv-cat --input 'no-header;rows=64;cols=64;type=ub' 'count;view=0;null'
> # accumulate as sliding window of size 4
> ( yes 255 | csv-to-bin ub ) | cv-cat --input 'no-header;rows=64;cols=64;type=ub' 'count;accumulate=4;view=0;null'
> # accumulate as sliding window of size 4 in reverse order
> ( yes 255 | csv-to-bin ub ) | cv-cat --input 'no-header;rows=64;cols=64;type=ub' 'count;accumulate=4,,reverse;view=0;null'
> # accumulate images in fixed order
> ( yes 255 | csv-to-bin ub ) | cv-cat --input 'no-header;rows=64;cols=64;type=ub' 'count;accumulate=4,fixed;view=0;null'
> # accumulate images in fixed order (reverse)
> ( yes 255 | csv-to-bin ub ) | cv-cat --input 'no-header;rows=64;cols=64;type=ub' 'count;accumulate=4,fixed,reverse;view=0;null'

For example, if you want to create an image from a fixed number of tiles, you could run something like this:

> ( yes 255 | csv-to-bin ub ) | cv-cat --fps 1 --input 'no-header;rows=64;cols=64;type=ub' 'count;accumulate=4,fixed;untile=2,2;view=0;null'

Say, you process images, but would like to view them in the middle of your pipeline in a different way (e.g. with increased brightness, resized, etc).

Now, you can do it with cv-cat tee. For example:

> # make a test image
> ( echo 20170101T000000,64,64,0 | csv-to-bin t,3ui; yes 255 | head -n $(( 64 * 64 )) | csv-to-bin ub ) > image.bin
> # observe that the images viewed in tee are passed unmodified down the main pipeline for further processing
> for i in {1..100}; do cat image.bin; done | cv-cat --fps 1 "count;tee=invert|view;resize=2" | cv-cat "view;null"

You can specify (almost) any pipeline in your tee filter, but viewing and, perhaps, saving intermediate images to files seem to be the main use cases so far.
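
For example, to save downscaled copies of intermediate images on the side while the originals continue down the main pipeline (a sketch, assuming the resize and file filters take their usual parameters after a colon inside tee):

> for i in {1..100}; do cat image.bin; done | cv-cat --fps 1 "tee=resize:0.5|file:png;invert" | cv-cat "view;null"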

Recently, we found that cv-cat view stopped working properly when used several times in the same cv-cat call.

Something like

> cat images.bin | cv-cat "view;invert;view;null"

would either crash or behave in an undefined way. All our debugging pointed to some sort of race condition in the underlying cv::imshow() call, or deeper in the X-windows-related code; thus, at the moment, it seems to be out of our control.

Use the following instead:

> cat images.bin | cv-cat "view;invert" | cv-cat "view;null"

cv-cat is now able to perform pixel clustering by colour, using the k-means algorithm.

For example:

> cv-cat --file rippa.png "convert-to=f,0.0039;kmeans=4;view;null" --stay

input image:

output image (4 clusters):

A new convenience utility ros-from-csv is now available in snark. It reads CSV records and converts them into ROS messages with the usual conveniences of csv streams (customised fields, binary format, stream buffering/flushing, etc).

Disclaimer: ros-from-csv is a Python application and therefore may not perform well on streams that require high bandwidth or low latency.

You could try it out, using the ROS tutorial Understanding Topics (http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics):

Run ROS Tutorial nodes:

> # in a new shell
> roscore
> # in a new shell
> rosrun turtlesim turtle_teleop_key

 

Send your own messages on the topic, using ros-from-csv:

> echo 1,2,3,4,5,6 | ros-from-csv /turtle1/cmd_vel

Or do a dry run:

> echo 1,2,3,4,5,6 | ros-from-csv /turtle1/cmd_vel --dry
linear: 
  x: 1.0
  y: 2.0
  z: 3.0
angular: 
  x: 4.0
  y: 5.0
  z: 6.0

You can also explicitly specify the message type:

> # dry run
> echo 1,2,3 | ros-from-csv --type geometry_msgs.msg.Point --dry
x: 1.0
y: 2.0
z: 3.0
 
> # send to a topic
> echo 1,2,3 | ros-from-csv --type geometry_msgs.msg.Point some-topic

A new convenience utility ros-to-csv is now available in snark. It outputs ROS messages as CSV, either from rosbags or from topics published live.

You could try it out, using the ROS tutorial Understanding Topics (http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics):

Run ROS Tutorial nodes:

> # in a new shell
> roscore
> # in a new shell
> rosrun turtlesim turtlesim_node
> # in a new shell
> rosrun turtlesim turtle_teleop_key

Run ros-to-csv; then, in the shell where you run turtle_teleop_key, press arrow keys to observe something like:

> # in a new shell
> ros-to-csv /turtle1/cmd_vel --verbose
ros-to-csv: listening to topic '/turtle1/cmd_vel'...
0,0,0,0,0,-2
0,0,0,0,0,2
-2,0,0,0,0,0
-2,0,0,0,0,0
0,0,0,0,0,2
0,0,0,0,0,-2
2,0,0,0,0,0
0,0,0,0,0,2

If you log some data in a rosbag:

> # in a new shell
> rosbag record /turtle1/cmd_vel

You could convert it to csv with a command like:

> ros-to-csv /turtle1/cmd_vel --bag 2017-11-06-14-43-34.bag
2,0,0,0,0,0
0,0,0,0,0,-2
-2,0,0,0,0,0
0,0,0,0,0,2
0,0,0,0,0,2
0,0,0,0,0,-2
2,0,0,0,0,0

Sometimes, you have a large file or input stream that is mostly sorted, which you would like to fully sort (e.g. in ascending order).

More formally, suppose you know that for any records Rn and Rm in your stream such that m - n > N, Rn < Rm, where N is a constant.

Now, you can sort such a stream, using csv-sort --sliding-window=<N>:

 

> ( echo 3; echo 1; echo 2; echo 5; echo 4 ) | csv-sort --sliding-window 3 --fields a
1
2
3
4
5
> ( echo 4; echo 5; echo 2; echo 1; echo 3 ) | csv-sort --sliding-window 3 --fields a --reverse
5
4
3
2
1

As usual, you can sort by multiple key fields (e.g. csv-sort --sliding-window=10 --fields=a,b,c), sort block by block (e.g. csv-sort --sliding-window=10 --fields=t,block), etc.

Sometimes, you have a large file or input stream that is mostly sorted on some fields, with just a few records out of order here and there. You may not care about those few outliers; all you want is most of your data sorted.

Now, you can discard the out-of-order records, using csv-sort, e.g.:

> ( echo 0; echo 1; echo 2; echo 1; echo 3 ) | csv-sort --discard-out-of-order --fields a
0
1
2
3
> ( echo 3; echo 2; echo 1; echo 2; echo 0 ) | csv-sort --discard-out-of-order --fields a --reverse
3
2
1
0

As usual, you can sort by multiple key fields (e.g. csv-sort --discard-out-of-order --fields=a,b,c), sort block by block (e.g. csv-sort --discard-out-of-order --fields=t,block), etc.

The ratio and linear-combination operations of cv-cat have been extended to support assignment to multiple channels. Previously, these operations would take up to 4 input channels (symbolically always named r, g, b, and a, regardless of the actual contents of the data) and produce a single-channel, grey-scale output. Now you can assign up to four output channels:

ratio syntax
... | cv-cat "ratio=(r-b)/(r+b),(r-g)/(r+g),r+b,r+g"

The right-hand side of the ratio / linear-combination operations contains comma-separated expressions defining each of the output channels through the input channels. The number of output channels is the number of comma-separated fields; it may differ from the number of input channels. As a shortcut, an empty field, such as in

ratio syntax shortcut
... | cv-cat "ratio=,r+g+b,"

is interpreted as channel pass-through. In the example above, the output has three channels, with channels 0 and 2 assigned verbatim from input channels 0 and 2 (r and b, symbolically), and channel 1 (symbolic g) assigned the sum of all three input channels.
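
That is, the shortcut above should be equivalent to spelling the pass-through channels out explicitly:

ratio shortcut spelled out
... | cv-cat "ratio=r,r+g+b,b"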

As yet another shortcut, cv-cat provides a shuffle operation that re-arranges the input channels without changing their values:

shuffle syntax
... | cv-cat "shuffle=b,g,r,r"

In this case, the order of the first 3 channels is reversed, while the former channel r is also duplicated into channel 3 (alpha). Internally, shuffling is implemented as a restricted case of linear combination, and therefore the other usual rules apply: the number of output channels is up to 4, it does not depend on the number of input channels, and an empty field on the right-hand side is interpreted as channel pass-through.
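
For instance, a sketch of the empty-field pass-through that keeps channels 0 and 1 as they are and replaces channel 2 with a copy of channel r:

shuffle with pass-through
... | cv-cat "shuffle=,,r"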

When using view-points, there is often a need to quickly visualise or hide several point clouds or other graphic primitives.

Now, you can group data in view-points, using the groups keyword: a source can be assigned to one or more groups with the groups argument. Basic usage is:

view-points "...;groups=g1,g2"

For example, if we have two graphs as follows:

$ cat <<EOF > edges01.csv
1,1,0,4,4,0
4,4,0,4,8,0
4,4,0,8,4,0
EOF

$ cat <<EOF > nodes01.csv
1,1,0,Node00
4,4,0,Node01
4,8,0,Node02
8,4,0,Node03
EOF

$ cat <<EOF > edges02.csv
4,9,1,4,12,1
4,12,1,0,9,1
4,9,1,0,4,1
0,4,1,0,9,1
EOF

$ cat <<EOF > nodes02.csv
0,4,1,Node20
0,9,1,Node21
4,9,1,Node22
4,12,1,Node23
EOF

We can separate the graphs, as well as group together nodes and edges of different graphs, as follows:

$ view-points "nodes01.csv;fields=x,y,z,label;colour=yellow;weight=5;groups=graph01,nodes,all" \
	"edges01.csv;fields=first/x,first/y,first/z,second/x,second/y,second/z;shape=line;colour=yellow;shape=line;groups=graph01,edges,all" \
	"nodes02.csv;fields=x,y,z,label;colour=green;weight=5;groups=graph02,nodes,all" \
	"edges02.csv;fields=first/x,first/y,first/z,second/x,second/y,second/z;shape=line;colour=green;shape=line;groups=graph02,edges,all"

Try switching the checkboxes for various groups (e.g. "graph01", "nodes", etc) on and off and observe the effect.

A quick note on new operations in the cv-calc utility. Time does not permit presenting proper examples, but hopefully cv-calc --help will be sufficient to give you an idea.

cv-calc grep

Output only those input images that conform to a certain condition. Currently, only the min/max number or ratio of non-zero pixels is supported, but the condition can be any set of filters applied to the input image (see cv-cat --help --verbose for the list of available filters).

Example: Output only images that have at least 60% of pixels darker than a given threshold:

> cat images.bin | cv-calc grep --filters="convert-to=f,0.0039;invert;threshold=0.555" --non-zero=ratio,0.6

cv-calc stride

Stride over the input image with a given kernel (just like a convolution stride) and output the resulting images.

cv-calc thin

Thin the image stream by a given rate or a desired frames-per-second number.

csv-shape is a new utility for various operations reshaping CSV data.

For now, only one operation is implemented: concatenate:

Concatenate by Grouping Input Records

> ( echo 1,a; echo 2,b; echo 3,c; echo 4,d; ) | csv-shape concatenate -n 2
1,a,2,b
3,c,4,d

Note: for ASCII text inputs, the records do not have to be regular or even have the same number of fields.

Concatenate by Sliding Window

ASCII:

> ( echo 1,a; echo 2,b; echo 3,c; echo 4,d; ) | csv-shape concatenate -n 2 --sliding-window
1,a,2,b
2,b,3,c
3,c,4,d

Binary:

> ( echo 1,a; echo 2,b; echo 3,c; echo 4,d; ) | csv-to-bin ui,c | csv-shape concatenate -n 2 --sliding-window --binary ui,c | csv-from-bin ui,c,ui,c
1,a,2,b
2,b,3,c
3,c,4,d

This is a brief introduction to the new cv-cat filters:

 

Filter: Accumulated

This filter calculates the pixel-wise (and channel-wise) average of a sequential series of input images.

As it relies on sequentially accumulated input images, this filter runs in serial mode in cv-cat. This has implications when used with 'forked' image processing.

However, parallel processing is still utilised across image rows.

Please download the following file, which contains 8 images showing movement: images.bin. To view the images:

cat images.bin | cv-cat "view=250;null" 


Average:

Calculating the average using all accumulated input images; the output is also 8 images.

cat images.bin | cv-cat "accumulated=average;view=250;null"

The 6th output image is the average of the first 6 input images, the 7th is the average of the first 7 input images.

 

Exponential Moving Average (EMA):

Calculating the average using a sliding window of images; here, a sliding window of 3 images is used.

cat images.bin | cv-cat "accumulated=average,3;view=250;null"

The output is 8 images; the 6th output image, for instance, is the exponentially weighted accumulation of images 1 to 6, with more weight given to recent images (see the EMA formula below).
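
For reference, a common form of the EMA recurrence (the exact weighting used by cv-cat may differ) is \( A_n = \alpha I_n + (1 - \alpha) A_{n-1} \), where \( I_n \) is the n-th input image, \( A_n \) the n-th output image, and the smoothing factor \( \alpha \) is derived from the window size \( w \), e.g. \( \alpha = 1 / w \).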

 

Forked Arithmetic Filters: Multiply, Divide, Add and Subtract

This group of filters works similarly to the mask filter (see Masking images with cv-cat): they all use sub-filters to generate a mask or operand image.

A mask contains values of 0 or greater; a pixel whose corresponding mask value is 0 is masked out. The arithmetic filters, by contrast, work on operand images, where the actual pixel values matter.

Multiply:

This filter does pixel-wise multiplication of the operand image and the input image. It wraps the cv::multiply function.

Please download this simple mask file: scaled-3f.bin

# view the mask
cat scaled-3f.bin | cv-calc header
cat scaled-3f.bin | cv-cat "view=1000;null"

Applying a single scaled image to the input images:

cat images.bin | cv-cat "multiply=load:scaled-3f.bin;view=250;null"

You should see images similar to those below. scaled-3f.bin has values in the range 0 to 1.0, so the command above will darken the images.

In the example above, cv-cat's multiply runs in parallel: multiple input images are multiplied by the scaled-3f.bin operand concurrently.

This is because all of the sub-filters can run in parallel mode; in this case, there is only one sub-filter, 'load'.

The example below also shows multiply running in parallel mode, as both load and threshold are parallelisable filters.

cat images.bin | cv-cat "multiply=load:scaled-3f.bin|threshold:0.7,1;view=250;null"


Subtract:

This filter simply subtracts the operand image from each input image. It is a wrapper to cv::subtract.

The operand image is derived from the sub-filters. In this example we shall use the accumulated filter mentioned earlier. This is a simple method for detecting moving objects in the image.

cat images.bin| cv-cat "subtract=accumulated:average,3;brightness=5;view=250;null"

The EMA average (with an EMA window of 3) is subtracted from each input image.

You should see images similar to those shown below:

In the example above, the subtract filter runs in serial mode. This is because one of its sub-filters ('accumulated' in this case) can only run in serial mode.

If you have a webcam handy, or one is built into your laptop, try this command:

cv-cat --camera "subtract=accumulated:average,10;view;null"



Add:

This is a wrapper to cv::add

This filter is the opposite of subtract: in this case, you add the EMA average (the "background") to the input images, and any moving object becomes transparent.

cat images.bin| cv-cat "add=accumulated:average,3;view=250;null"

This is the result:

Of course, you can always try this pipeline with a physical camera:

cv-cat --camera "subtract=accumulated:average,10;view;null"

 

Divide:

This filter wraps cv::divide; it divides the input images by the operand.

The file scaled-3f.bin has values in the range 0 to 1.0; hence, dividing the images by scaled-3f.bin will brighten them.

For all the arithmetic filters, the output image type is the same as the input image type.

cat images.bin| cv-cat "divide=load:scaled-3f.bin;view=250;null"

 

 

A brief note on the latest additions to cv-cat (and all other camera applications linking in the same filters).

As of today, the application provides access to all the morphology operations available in OpenCV:

  1. erosion
  2. dilation
  3. opening
  4. closing
  5. morphological gradient
  6. top-hat
  7. black-hat

See the OpenCV documentation for more details. In addition, a skeleton (a.k.a. thinning) filter is implemented on top of the basic morphological operations. The implementation follows this demo. However, it is neither the fastest nor the best implementation of thinning; possibly the optimal approach is the one proposed in the paper "A fast parallel algorithm for thinning digital patterns" by T.Y. Zhang and C.Y. Suen. See this demo for a comparative evaluation of several thinning algorithms (highly recommended!)

Some examples of usage are given below.

Erosion

Input image

Processing

erosion
cv-cat --file spots.png "erode=circle,9,;encode=png" --output=no-header > eroded.png

Result

Multiple Iterations

OpenCV allows multiple iterations of the same morphology operation; the default number of iterations is 1. Below is the same erosion operation applied twice (please see cv-cat's help):

cv-cat --file spots.png "erode=circle,9,,2;encode=png" --output=no-header > eroded-twice.png

Result

Thinning

Input image

Processing

thinning
cv-cat --file opencv-1024x341.png "channels-to-cols;cols-to-channels=0,repeat:3;skeleton=circle,3,;encode=png" --output=no-header > skeleton.png

Result

csv-calc

csv-calc is an application that calculates statistics (such as mean, median, size, standard deviation, etc.) on multiple fields of an input file. Input records can be grouped by id, block, or both.

One drawback of csv-calc is that it only outputs the statistics for each id and block. The input records themselves are not preserved. This means that you cannot use csv-calc as part of a pipeline.

csv-calc --append

The --append option to csv-calc passes through the input stream, adding to every record the relevant statistics for its id and block.

For example:

> echo -e "1,0\n2,0\n3,1\n4,1" | csv-calc mean --fields=a,id
Output (mean, id):
1.5,0
3.5,1
 
> echo -e "1,0\n2,0\n3,1\n4,1" | csv-calc mean --fields=a,id --append
Output (a, id, mean):
1,0,1.5
2,0,1.5
3,1,3.5
4,1,3.5

keeping track of fields and formats

Another challenge for csv-calc users is the large number of output fields that it generates (it applies every operation to every indicated field).

There are now --output-fields and --output-format options to show what kind of output a given csv-calc command will produce.

Examples:

> csv-calc mean,diameter --fields=t,a,id,block --binary=t,d,ui,ui --output-fields
t/mean,a/mean,t/diameter,a/diameter,id,block
 
> csv-calc mean,diameter --fields=t,a,id,block --binary=t,d,ui,ui --output-format
t,d,d,d,ui,ui

With --append, these fields are appended to input fields:
id and block are not repeated 
> csv-calc mean,diameter --fields=t,a,id,block --binary=t,d,ui,ui --output-fields --append
t/mean,a/mean,t/diameter,a/diameter

> csv-calc mean,diameter --fields=t,a,id,block --binary=t,d,ui,ui --output-format --append
t,d,d,d

points-to-ros and points-from-ros are utilities for publishing and receiving PointCloud2 messages on ROS.

setup

To build them, you need to set "snark_build_ros" to ON in the snark cmake configuration.
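
A minimal sketch of configuring and building (the build directory path is an assumption; adjust to your setup):

cd snark/build   # assuming an out-of-source build directory
cmake -Dsnark_build_ros=ON ..
make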

We use snark-graphics-test-pattern to generate some sample points in a cube:

snark-graphics-test-pattern cube 100000 0.1 0.01 >cube.csv

Here is the output: cube.csv

To run ROS, you need to set up the environment and run roscore:

source /opt/ros/kinetic/setup.bash
roscore

 

points-from-ros

This utility subscribes to the specified topic, receives PointCloud2 messages, and writes the point data as CSV or binary to stdout.

Either the --binary or the --format option must be specified, setting the output to binary or ASCII CSV, respectively.

The field names and message format are embedded in the message; the format is used for conversion.

You can use --output-fields or --output-format to get the field names and message format from the message (the publisher must be running).

source /opt/ros/kinetic/setup.bash
points-from-ros --topic "/points1" --output-fields
points-from-ros --topic "/points1" --output-format
#ascii
points-from-ros --topic "/points1" --fields x,y,z,r,g,b --format 3d,3ub | view-points --fields x,y,z,r,g,b
#binary
points-from-ros --topic "/points1" --fields x,y,z,r,g,b --binary 3d,3ub | view-points --fields x,y,z,r,g,b --binary 3d,3ub

 

points-to-ros

This utility reads binary or ASCII CSV data from stdin and publishes it as PointCloud2 messages on ROS.

Either the --binary or the --format option must be specified, indicating whether the input is binary or ASCII.

The --fields option specifies the field names for one point in the message.

If a field named block is present, it is used to break records into separate messages: records with the same block number are grouped into one message. When no such field is present, points-to-ros reads stdin until EOF and then sends a single message.

The --hang-on option delays the points-to-ros exit, so that clients can receive all the data of the last message.

#ascii
cat cube.csv | points-to-ros --topic "/points1" --fields x,y,z,r,g,b,a --format 3d,3ub,ub --hang-on
#binary
cat cube.csv | csv-to-bin 3d,3ub,ub | points-to-ros --topic "/points1" --fields x,y,z,r,g,b,a --binary 3d,3ub,ub --hang-on

 

ros-bag-to-bin

This utility can directly cat binary data from a ROS bag file:

ros-bag-to-bin -h
ros-bag-to-bin [-h] [--timestamp] [--block] file topic size
e.g. 
ros-bag-to-bin "pointscloud.bag" "/velodyne_points" $(csv-size 4f) --timestamp --block | csv-from-bin t,ui,4f | head

 

 

The problem

You are working with a data pipeline, and on a certain record, you want to end processing and exit the pipeline.

But to break on some condition in the input, you need an application that parses each input record.

Worse, the condition you want could be a combination of multiple fields, or use fields unrelated to the data you want to process.

Introducing csv-eval --exit-if!

Previously, csv-eval had a --select option that passed through any records matching the select condition.

csv-eval --exit-if also passes through input records unchanged, but it exits on the first record matching the exit condition.

As with csv-eval --select, you can use any expression on the input fields that evaluates to bool.

Comparing the two features:

$ echo -e "1,1\n2,2\n3,0\n1,3\n1,2" | csv-eval --fields=a,b --select="a+b<>3"
Output:
1,1
2,2
1,3

$ echo -e "1,1\n2,2\n3,0\n1,3\n1,2" | csv-eval --fields=a,b --exit-if="a+b==3"
Output:
1,1
2,2

 

 
