Blog

Earlier this month our undergraduate droid racing teams took their robots to the international Droid Racing event in Brisbane. In a field of 15 teams from around Australia and New Zealand, they took home 2nd and 3rd place!

No videos of the 2019 event are up yet, so here's a photo and some video from the 2017 event:

RoboCup 2019

The ACFR at RoboCup 2019

Thanks to an elite band of volunteers, the ACFR were able to run a booth at this year's RoboCup event.

RoboCup is an international robotics competition that's been running for over 20 years.  It started out focused on robotic soccer, and has since branched out to include industrial, rescue, and home robotics events.

It's a very well-attended event. You can get an idea of the scale from the album here: https://photos.app.goo.gl/AUSKVJo24C3KegRM8 and find more information on the competition here: https://2019.robocup.org/


 

Charika De Alvis, Mao Shan, Stewart Worrall, and Eduardo Nebot

 

Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. The combination of the camera and lidar sensors enables precise range information to be projected onto the visual image data. This gives a high-level understanding of the scene which can be used to enable context-based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This can be difficult due to the errors in the intrinsic calibration of the camera, extrinsic calibration between the camera and the lidar and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error resulting from the projection of the lidar measurements into a consistent odometry frame is not possible to remove entirely, and as such, it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for the prediction of the uncertainty of lidar measurements (in 3D) projected into the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.
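To make the projection-uncertainty idea concrete, here is a minimal sketch of projecting a single lidar point into the image with first-order covariance propagation. The simple pinhole model, the calibration numbers, and the 5 cm point uncertainty are illustrative assumptions, not the implementation from the paper.

```python
# Illustrative only: simple pinhole model, made-up calibration, no lens distortion.
import numpy as np

def project_with_uncertainty(p_lidar, Sigma_p, R, t, K):
    """Project a 3D lidar point into the image and propagate its covariance.

    p_lidar : (3,) point in the lidar frame
    Sigma_p : (3, 3) covariance of the point (e.g. motion correction + range noise)
    R, t    : extrinsic rotation/translation from the lidar to the camera frame
    K       : (3, 3) camera intrinsic matrix
    """
    p_cam = R @ p_lidar + t                      # lidar -> camera frame
    x, y, z = p_cam
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    uv = np.array([fx * x / z + cx, fy * y / z + cy])

    # Jacobian of the pinhole projection w.r.t. the camera-frame point
    J_proj = np.array([[fx / z, 0.0, -fx * x / z**2],
                       [0.0, fy / z, -fy * y / z**2]])
    J = J_proj @ R                               # chain rule through the extrinsics

    Sigma_uv = J @ Sigma_p @ J.T                 # 2x2 covariance in pixel space
    return uv, Sigma_uv

# Toy example with made-up calibration and a 5 cm isotropic point uncertainty
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, -0.1, 0.2])
uv, Sigma_uv = project_with_uncertainty(np.array([2.0, 0.5, 10.0]),
                                        0.05**2 * np.eye(3), R, t, K)
print(uv, np.sqrt(np.diag(Sigma_uv)))            # pixel location and 1-sigma spread
```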

Philip Gun, Andrew Hill, and Robin Vujanic

This paper addresses the problem of planning time-optimal trajectories for multiple cooperative agents along specified paths through a static road network. Vehicle interactions at intersections create non-trivial decisions, with complex flow-on effects for subsequent interactions. A globally optimal, minimum time trajectory is found for all vehicles using Mixed Integer Linear Programming (MILP). Computational performance is improved by minimising binary variables using iteratively applied targeted collision constraints, and efficient goal constraints. Simulation results in an open-pit mining scenario compare the proposed method against a fast heuristic method and a reactive approach based on site practices. The heuristic is found to scale better with problem size while the MILP is able to avoid local minima.

Paper: https://arxiv.org/pdf/1810.02517.pdf
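For a feel of where the binary variables come from, here is a toy mixed-integer model (using PuLP) for a single shared intersection: one binary decides which vehicle crosses first, and a big-M disjunction enforces a minimum time separation. The numbers and the sum-of-completion-times objective are made-up assumptions and far simpler than the paper's formulation.

```python
# Illustrative only: two vehicles, one intersection; not the paper's MILP.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, PULP_CBC_CMD

earliest = {"v1": 8.0, "v2": 10.0}   # earliest possible arrival at the intersection (s)
remaining = {"v1": 30.0, "v2": 5.0}  # travel time from the intersection to each goal (s)
sep, big_m = 4.0, 1000.0             # required separation and big-M constant

prob = LpProblem("intersection_ordering", LpMinimize)
t = {v: LpVariable(f"t_{v}", lowBound=earliest[v]) for v in earliest}
first = LpVariable("v1_goes_first", cat=LpBinary)

# objective: total completion time of both vehicles
prob += t["v1"] + remaining["v1"] + t["v2"] + remaining["v2"]

# disjunctive separation constraints: exactly one ordering is active
prob += t["v2"] >= t["v1"] + sep - big_m * (1 - first)
prob += t["v1"] >= t["v2"] + sep - big_m * first

prob.solve(PULP_CBC_CMD(msg=0))
print({v: t[v].value() for v in t},
      "v1 first" if first.value() > 0.5 else "v2 first")
```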

 

Nathan Wallace, He Kong, Andrew J. Hill and Salah Sukkarieh

The control of field robots in varying and uncertain terrain conditions presents a challenge for autonomous navigation. Online estimation of the wheel-terrain slip characteristics is essential for generating the accurate control predictions necessary for tracking trajectories in off-road environments. Receding horizon estimation (RHE) provides a powerful framework for constrained estimation, and when combined with receding horizon control (RHC), yields an adaptive optimisation-based control method. Presently, such methods assume slip to be constant over the estimation horizon, while our proposed structured blocking approach relaxes this assumption, resulting in improved state and parameter estimation. We demonstrate and compare the performance of this method in simulation, and propose an overlapping-block strategy to ameliorate some of the limitations encountered in applying noise-blocking in a receding horizon estimation and control (RHEC) context.

 

Preprint: https://arxiv.org/abs/1810.04366
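The blocking idea can be illustrated with a toy moving-horizon least-squares problem: instead of holding slip constant over the whole horizon, the slip is piecewise constant over a few blocks so it can adapt within the window. The 1D wheel model, noise levels, and block layout below are illustrative assumptions, not the paper's RHEC formulation.

```python
# Illustrative only: toy horizon estimation with a blocked slip parameter.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
dt, v = 0.1, 1.0
N, n_blocks = 30, 3                         # horizon length and number of slip blocks
true_slip = np.concatenate([np.full(10, 0.1), np.full(10, 0.3), np.full(10, 0.2)])

# simulate positions under the varying slip, plus noisy position measurements
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = x[k] + v * (1.0 - true_slip[k]) * dt
z = x[1:] + 0.005 * rng.standard_normal(N)

block_of = np.repeat(np.arange(n_blocks), N // n_blocks)   # which block each step uses

def residuals(theta):
    """theta = [x0, slip_block_0, ..., slip_block_{B-1}]"""
    xk, slips = theta[0], theta[1:]
    res = []
    for k in range(N):
        xk = xk + v * (1.0 - slips[block_of[k]]) * dt
        res.append(xk - z[k])
    return np.asarray(res)

sol = least_squares(residuals, x0=np.zeros(1 + n_blocks))
print("estimated slip per block:", np.round(sol.x[1:], 3))   # ~[0.1, 0.3, 0.2]
```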

Wei Zhou, Alex Zyner, Stewart Worrall, and Eduardo Nebot

Abstract: Semantic segmentation using deep neural networks has been widely explored to generate high-level contextual information for autonomous vehicles. To acquire a complete 180° semantic understanding of the forward surroundings, we propose to stitch semantic images from multiple cameras with varying orientations. However, previously trained semantic segmentation models showed unacceptable performance after significant changes to the camera orientations and the lighting conditions. To avoid time-consuming hand labeling, we explore and evaluate the use of data augmentation techniques, specifically skew and gamma correction, from a practical real-world standpoint to extend the existing model and provide more robust performance. The experimental results presented have shown significant improvements with varying illumination and camera perspective changes. A comparison of the results from a high-performance network (PSPNet) and a real-time capable network (ENet) is provided.


Paper: https://ieeexplore.ieee.org/document/8603770
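A small sketch of the two augmentations evaluated in the paper, gamma correction and skew, applied to an image array. The parameter ranges and the shear construction here are illustrative assumptions.

```python
# Illustrative only: made-up image and parameter ranges, not the paper's pipeline.
import numpy as np
from scipy.ndimage import affine_transform

def gamma_correct(img, gamma):
    """Simulate a lighting change on a float image in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def skew(img, shear=0.2):
    """Simulate a perspective-like change with a horizontal shear, per channel."""
    matrix = np.array([[1.0, 0.0], [shear, 1.0]])   # maps output coords to input coords
    return np.stack([affine_transform(img[..., c], matrix, order=1)
                     for c in range(img.shape[-1])], axis=-1)

rng = np.random.default_rng(1)
img = rng.random((64, 96, 3))                       # stand-in for a camera frame
augmented = [gamma_correct(img, g) for g in (0.5, 1.5, 2.2)] + [skew(img, 0.15)]
print([a.shape for a in augmented])
```

For segmentation training, the same geometric transform would also be applied to the label mask, with nearest-neighbour interpolation so that class indices are not blended.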


 

 

Vera L. J. Somers and Ian R. Manchester

Unmanned Aerial Vehicle (UAV) path planning algorithms often assume a known reward function or priority map, indicating the most important areas to visit. In this paper we propose a method to create priority maps for monitoring or intervention of dynamic spreading processes such as wildfires. The presented optimization framework utilizes the properties of positive systems, in particular the separable structure of value (cost-to-go) functions, to provide scalable algorithms for surveillance and intervention. We present results obtained for 16-node and 1000-node examples and convey how the priority map responds to changes in the dynamics of the system. The larger example of 1000 nodes, representing a fictional landscape, shows how the method can integrate bushfire spreading dynamics, landscape and wind conditions. Finally, we give an example of combining the proposed method with a travelling salesman problem for UAV path planning for wildfire intervention.

 

 

Preprint: https://arxiv.org/abs/1903.11204
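The separable value-function property the abstract refers to can be sketched in a few lines: for a positive linear spread model x_{k+1} = A x_k with per-node cost c and discount gamma, the cost-to-go is linear, V(x) = p'x with p = (I - gamma*A')^{-1} c, and p can be read directly as a priority over nodes. The 4-node graph, spread rates, and discount below are made-up numbers, not the paper's wildfire model.

```python
# Illustrative only: tiny positive-system example, not the paper's landscape model.
import numpy as np

n = 4
A = np.array([[0.90, 0.05, 0.00, 0.00],    # A[i, j]: spread from node j to node i
              [0.05, 0.90, 0.05, 0.00],
              [0.00, 0.05, 0.90, 0.05],
              [0.00, 0.00, 0.05, 0.90]])
c = np.array([1.0, 1.0, 1.0, 5.0])          # node 3 is the most valuable to protect
gamma = 0.95                                 # discount; requires gamma * rho(A) < 1

assert gamma * np.max(np.abs(np.linalg.eigvals(A))) < 1.0
p = np.linalg.solve(np.eye(n) - gamma * A.T, c)   # linear cost-to-go weights

priority = p / p.sum()                       # normalised priority map over nodes
print(np.round(priority, 3))                 # higher weight near the high-cost node
```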

Donald Dansereau, Bernd Girod, and Gordon Wetzstein

Feature detectors and descriptors are key low-level vision tools that many higher-level tasks build on. Unfortunately these fail in the presence of challenging light transport effects including partial occlusion, low contrast, and reflective or refractive surfaces. Building on spatio-angular imaging modalities offered by emerging light field cameras, we introduce a new and computationally efficient 4D light field feature detector and descriptor: LiFF. LiFF is scale invariant and utilizes the full 4D light field to detect features that are robust to changes in perspective. This is particularly useful for structure from motion (SfM) and other tasks that match features across viewpoints of a scene. We demonstrate significantly improved 3D reconstructions via SfM when using LiFF instead of the leading 2D or 4D features, and show that LiFF runs an order of magnitude faster than the leading 4D approach. Finally, LiFF inherently estimates depth for each feature, opening a path for future research in light field-based SfM.

Preprint: https://arxiv.org/abs/1901.03916

Supplemental: https://arxiv.org/src/1901.03916v1/anc/LiFF_Supplemental.pdf

Code and dataset: http://dgd.vision/Tools/LiFF/
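This is not the LiFF detector itself, but a toy version of one idea behind it: a candidate feature that is consistent with a single slope (depth) across all sub-aperture views is trusted, and the best slope doubles as a depth estimate. The light-field layout, integer slopes, and variance score are all simplifying assumptions for illustration.

```python
# Illustrative only: slope-consistency scoring, not the LiFF algorithm.
import numpy as np

def slope_consistency(L, y0, x0, half=4, slopes=range(-3, 4)):
    """Score a candidate feature at (y0, x0) in the central view of L[u, v, y, x]."""
    U, V, H, W = L.shape
    uc, vc = U // 2, V // 2
    best = (np.inf, 0)
    for d in slopes:                               # candidate disparity slope (px/view)
        patches = []
        for u in range(U):
            for v in range(V):
                yy, xx = y0 + d * (u - uc), x0 + d * (v - vc)
                if half <= yy < H - half and half <= xx < W - half:
                    patches.append(L[u, v, yy - half:yy + half, xx - half:xx + half])
        var = np.mean(np.var(np.stack(patches), axis=0))  # disagreement across views
        best = min(best, (var, d))
    return best                                    # (consistency score, estimated slope)

rng = np.random.default_rng(2)
L = np.tile(rng.random((1, 1, 64, 64)), (5, 5, 1, 1))   # trivially consistent light field
print(slope_consistency(L, 32, 32))                      # near-zero score at slope 0
```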

Dorian Tsai, Donald Dansereau, Thierry Peynot and Peter Corke

To be effective, robots will need to reliably operate in scenes with refractive objects in a variety of applications; however, refractive objects can cause many robotic vision algorithms, such as structure from motion, to become unreliable or even fail. We propose a novel method to distinguish between refracted and Lambertian image features using a light field camera.
Previous refracted feature detection methods are limited to light field cameras with large baselines relative to the refractive object; for these, our method achieves state-of-the-art performance. We extend these capabilities to light field cameras with much smaller baselines than previously considered, where we achieve up to 50% higher refracted feature detection rates. Specifically, we propose to use textural cross-correlation to characterise apparent feature motion in a single light field, and compare this motion to its Lambertian equivalent based on 4D light field geometry.
For structure from motion, we demonstrate that rejecting refracted features using our distinguisher yields lower reprojection error, lower failure rates, and more accurate pose estimates when the robot is approaching refractive objects. Our method is a critical step towards allowing robots to operate in the presence of refractive objects.

Paper: https://ieeexplore.ieee.org/document/8556460
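A rough sketch of the cross-correlation idea named in the abstract: track a patch across a row of sub-aperture views and check whether its apparent motion follows the straight-line (Lambertian) model; a large residual suggests a refracted feature. The 1D row of views, the patch sizes, and the threshold are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative only: patch tracking and a linearity check, not the paper's method.
import numpy as np
from scipy.signal import fftconvolve

def apparent_shifts(views, y0, x0, half=6, search=8):
    """Cross-correlate a reference patch against each view to find its horizontal shift."""
    ref = views[len(views) // 2][y0 - half:y0 + half, x0 - half:x0 + half]
    ref = ref - ref.mean()
    shifts = []
    for img in views:
        strip = img[y0 - half:y0 + half, x0 - half - search:x0 + half + search]
        corr = fftconvolve(strip - strip.mean(), ref[::-1, ::-1], mode='valid')
        shifts.append(np.argmax(corr[0]) - search)      # offset of the best match
    return np.asarray(shifts, dtype=float)

def looks_refracted(views, y0, x0, tol=0.5):
    """Fit shift ~ slope * view_index and flag large deviation from the linear model."""
    s = apparent_shifts(views, y0, x0)
    u = np.arange(len(s)) - len(s) // 2
    slope = np.polyfit(u, s, 1)[0]
    return np.max(np.abs(s - slope * u)) > tol

# Toy data: 5 horizontally-offset views of the same texture (a Lambertian case)
rng = np.random.default_rng(3)
base = rng.random((64, 96))
views = [np.roll(base, 2 * u, axis=1) for u in range(-2, 3)]   # 2 px shift per view
print(looks_refracted(views, 32, 48))                          # False for this clean case
```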

This ACFR-wide blog is to help us keep up to date across the centre.  Post your major milestones, papers, grant successes, awards, media appearances, and other noteworthy news and outcomes here.

All accepted papers will get a blog post.  To create a new blog post, use "Create from template" via the three dots next to the "Create" button in the banner.  Follow this general format:

Blog Post Title: <venue>: <paper title>

Blog Post Body:

<@authors>

<abstract>

<optional teaser image>

<attach or link to preprint, and if applicable links to dataset, code, video>

Check the blog for examples.