
This video shows our latest work in light interception modelling for orchards. Tree geometry is modelled using point clouds from a handheld LiDAR, while public weather data is used to model the sky at a given time and date. The light in the sky is then ray-traced through the tree to provide an estimate of light interception, distribution and absorption. The model can be applied to decision support systems, for example to inform optimal pruning practices. For more information see our arXiv preprint.
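
As a rough illustration of the ray-tracing step, here is a minimal sketch that voxelises a tree point cloud and marches rays from weighted sky directions to estimate intercepted light. It is a toy stand-in rather than our actual pipeline; the function name, parameters and the simple top-plane ray grid are all assumptions.

```python
import numpy as np

def intercepted_fraction(points, sky_dirs, sky_weights, voxel=0.10, step=0.05):
    """Crude estimate of the fraction of sky light a canopy intercepts.

    points      : (N, 3) tree point cloud in metres
    sky_dirs    : (M, 3) unit vectors pointing from sky patches down
                  towards the tree (z component negative)
    sky_weights : (M,) relative radiance of each sky patch, summing to 1
    """
    # Voxelise the canopy so occupancy tests are cheap set lookups.
    occupied = {tuple(v) for v in np.floor(points / voxel).astype(int)}
    lo, hi = points.min(axis=0), points.max(axis=0)

    # Grid of ray origins on a plane just above the canopy; slanted rays
    # that exit the bounding box sideways simply count as misses.
    xs = np.arange(lo[0], hi[0], voxel)
    ys = np.arange(lo[1], hi[1], voxel)
    origins = np.array([(x, y, hi[2] + voxel) for x in xs for y in ys])

    intercepted = 0.0
    for d, w in zip(sky_dirs, sky_weights):
        hits = 0
        for o in origins:
            p = o.copy()
            while p[2] > lo[2]:            # march until below the canopy
                p += step * d
                if tuple(np.floor(p / voxel).astype(int)) in occupied:
                    hits += 1              # ray blocked by foliage
                    break
        intercepted += w * hits / len(origins)
    return intercepted
```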

This video shows our latest results in mango fruit detection, localisation and mapping. The multi-sensor robot 'Shrimp' acquires data with a variety of different sensors, including lidar for tree canopy segmentation and colour vision for fruit detection and triangulation. This is arguably the world's most accurate system for mapping individual whole fruit in commercial orchards, while the fruit is still on the tree. Compared against post-harvest yield counts for individual trees, the system counts accurately (linear fit with a near-unity slope of 0.96 and an r^2 value of 0.89). The system has now been validated over two consecutive seasons, with a third planned for later this year (2017). Scanning is performed two months before harvest, so there is plenty of opportunity to use the results for precision agriculture and on-farm decision making, towards optimised fruit production.
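
To make the quoted accuracy figures concrete, fit statistics of this kind can be computed along the following lines (the counts below are made-up placeholders, not our measurements):

```python
import numpy as np

# Hypothetical per-tree counts: machine count vs. post-harvest ground truth.
machine = np.array([112, 87, 140, 95, 60, 128])
harvest = np.array([118, 90, 145, 101, 58, 135])

# Linear fit: harvest ≈ slope * machine + intercept.
slope, intercept = np.polyfit(machine, harvest, 1)

# Coefficient of determination r^2 of the fit.
pred = slope * machine + intercept
ss_res = np.sum((harvest - pred) ** 2)
ss_tot = np.sum((harvest - harvest.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"slope={slope:.2f}, intercept={intercept:.1f}, r^2={r2:.2f}")
```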

A new three-year program of high-tech R&D for orchard management has begun, using our Shrimp robot to acquire data from mango, avocado and macadamia orchards.

http://www.abc.net.au/news/2016-02-04/mapping-australias-tree-crops/7137014

The data includes lidar, vision, thermal, hyperspectral, soil conductivity and natural gamma, demonstrating that there are many ways to view the humble tree.


James Underwood gave a talk about autonomous information systems for tree crops, at the APAL speed updating session, alongside the National Horticulture Convention on the Gold Coast in June 2015. All the talks are available here.


During our recent field trip to the Yarra Valley, we demonstrated autonomous row following in the trellis-structured apple configuration. The system worked reliably, and we used it to gather data for yield prediction across approximately 30 rows of apples.

This video shows Shrimp driving fully autonomously in an apple orchard in the Yarra Valley, Australia. It uses a 360-degree lidar to guide it along the row (no need for GPS).

Unlike Mantis, Shrimp points its 2D lidars sideways to scan the trees, so the 360-degree Velodyne sensor was used for navigation instead. To emulate a lower-cost 2D lidar, only one of the Velodyne's 64 lasers was processed. We used the autonomous system to obtain fruit yield data from approximately 30 rows of the farm without error.
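
A minimal sketch of the underlying idea, assuming the common "fit a line to each tree row and steer towards the centreline" approach (illustrative only, not our implementation; the function name and gains are invented, and it assumes both rows are visible in the scan):

```python
import numpy as np

def row_following_steer(scan_xy, k_lat=0.8, k_head=1.2):
    """Compute a steering command from one horizontal lidar ring.

    scan_xy : (N, 2) points in the vehicle frame, x forward, y left.
    Returns a steering angle (rad), positive = turn left.
    """
    ahead = scan_xy[scan_xy[:, 0] > 0.5]   # keep points ahead of the vehicle
    left = ahead[ahead[:, 1] > 0]          # left tree row
    right = ahead[ahead[:, 1] < 0]         # right tree row

    # Fit y = m*x + c to each row; the centreline is their average.
    ml, cl = np.polyfit(left[:, 0], left[:, 1], 1)
    mr, cr = np.polyfit(right[:, 0], right[:, 1], 1)
    m_c, c_c = (ml + mr) / 2, (cl + cr) / 2

    heading_err = np.arctan(m_c)   # row direction relative to vehicle heading
    lateral_err = c_c              # centreline offset at the vehicle

    # Simple proportional steering law on both errors.
    return k_lat * lateral_err + k_head * heading_err
```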

This is a demonstration on our research platform, but the technology could easily be applied to any existing or new farm equipment, enabling smart farm vehicles to act as assistants to farmers.

We are interested in using robotic manipulators for harvesting and weeding applications.

This video from our field lab illustrates our concept for how a robot arm might look in performing a harvesting task.

This video shows some of the data and first processing from our recent trip to a banana plantation.

We gathered data from a banana plantation near Mareeba in the far north of Australia, at the end of 2013. Using Shrimp, we drove up and down rows of the plantation, acquiring 3D maps and image data.

Farmers typically use a system of coloured bags to denote the expected harvest date, and these bags can be detected and mapped by the system. It is also hoped that, in the future, growth rates of shoots or 'suckers' can be measured to predict fruit maturation times directly, many months in advance.
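
Detecting a known bag colour is a classic colour-segmentation task; a minimal sketch using OpenCV is below. The HSV bounds, function name and minimum blob area are placeholders that would need tuning to the actual bags.

```python
import cv2
import numpy as np

def detect_bags(image_bgr, hsv_lo, hsv_hi, min_area=500):
    """Find coloured harvest-date bags by HSV thresholding.

    hsv_lo / hsv_hi : per-colour HSV bounds, e.g. roughly
                      (100, 120, 60)..(130, 255, 255) for a blue bag
                      (hypothetical values, tune to the real bags).
    Returns centroids (x, y) of detected bag-sized blobs.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Morphological opening removes small speckle before blob extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```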

One of our research ground vehicles, Mantis, was used to successfully demonstrate autonomy at an almond orchard.

The robot uses its forward-looking laser to estimate the geometry of the tree foliage ahead, enabling it to drive along the row without needing GPS. Additionally, it can detect people in front of it, slowing down and coming to a safe halt.
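
The slow-down-and-stop behaviour can be sketched as a simple speed governor on the nearest return in a frontal cone of the laser scan. This is illustrative only; the thresholds and function name are assumptions, and a real safety system needs far more care than this.

```python
import numpy as np

def safe_speed(ranges, angles, v_max=1.5,
               stop_dist=2.0, slow_dist=8.0, cone=np.radians(30)):
    """Scale vehicle speed based on the nearest obstacle ahead.

    ranges, angles : one forward laser scan (metres, radians; 0 = dead ahead)
    Returns a commanded speed in m/s: full speed when the path is clear,
    ramping down linearly and reaching 0 inside the stop distance.
    """
    ahead = np.abs(angles) < cone          # only consider a frontal cone
    if not np.any(ahead):
        return v_max
    nearest = np.min(ranges[ahead])
    if nearest <= stop_dist:
        return 0.0                         # safe halt
    if nearest >= slow_dist:
        return v_max
    # Linear ramp between the stop and slow distances.
    return v_max * (nearest - stop_dist) / (slow_dist - stop_dist)
```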

The video shows a conceptual demonstration of how this could be used as a farmer assistance mechanism, whereby the vehicle could accompany a farmer, carrying heavy loads such as buckets of fruit, or towing other forms of equipment. Although demonstrated on one of our Perception Research Ground Vehicles (Mantis), the core technology can easily be applied to existing or new farm machinery.

GPS can be unreliable under canopy, because foliage occludes the line of sight between the vehicle and the satellites. Therefore, forms of autonomy that require no GPS are likely to be more reliable.

Apple Counting and Yield Estimation
Using camera data, we have developed algorithms to segment individual apples and then use the apple counts to estimate yield.

Images are collected as "Shrimp" surveys the orchard. The algorithm classifies and counts apples in each image and provides a yield estimate for each row "Shrimp" surveyed. This estimate gives the farmer an early indication of potential yield and allows the farm operation to be refined and optimised.
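
Rolling image-level counts up into a row-level estimate is conceptually simple; the sketch below assumes a single calibration factor (fitted against harvest data) to correct for occluded fruit and for fruit seen in more than one image. All names and numbers here are illustrative.

```python
def row_yield_estimate(image_counts, calibration=1.0):
    """Roll per-image apple counts up into a per-row yield estimate.

    image_counts : detected apple counts for each image along the row
    calibration  : factor correcting for occluded fruit and for fruit
                   visible in overlapping images (fitted to ground truth)
    """
    return calibration * sum(image_counts)

# Example: counts from five images along one row, with a hypothetical
# calibration factor of 0.7 (i.e. raw detections over-count by ~40%).
print(row_yield_estimate([34, 41, 29, 38, 25], calibration=0.7))
```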


Using lidar (laser) data, individual trees can be segmented, counted and mapped, allowing farm information to be associated with each tree.

As "Shrimp" drives along a row of an orchard, 3D maps are built from laser data. From these maps we can segment and recognise individual trees, which is useful for data management. For example, when combined with our yield estimation techniques, it allows us to measure and record the yield of each individual tree. This can be used to track information such as yield over time, and it can be used actively, for example to target autonomous or computer-assisted spray trucks with spray programs for each tree.
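
One simple way to segment trees from a row's point cloud is plan-view clustering; the sketch below uses scikit-learn's DBSCAN as a stand-in (not necessarily our method, and rows with touching canopies would need a smarter split, e.g. trunk detection).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_trees(points_xyz, eps=0.4, min_samples=30):
    """Split one row's lidar point cloud into individual tree clusters.

    points_xyz : (N, 3) points for one row, ground already removed.
    Returns a label per point (-1 = noise) and per-tree centroids,
    which can then key per-tree records such as yield.
    """
    # Cluster in plan view: x along the row, y across it.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        points_xyz[:, :2])
    centroids = {int(k): points_xyz[labels == k].mean(axis=0)
                 for k in set(labels) if k != -1}
    return labels, centroids
```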

