As of May 2020, the table view was replaced with a Calendar.

Date | Speaker | Topic
---2020---

Thurs 16 July 2020 1p



Thurs 9 July 2020 5p

Cosimo Della Santina, TUM, DLR

Homepage: http://www.cosimodellasantina.com/
Thurs 2 July 2020 1p

Title: Path Following of Underactuated Mechanical Systems: An Energy Perspective
Thurs 25 Jun 2020 1p

Prof Dana Kulic, Monash

Homepage: https://research.monash.edu/en/persons/dana-kulic
Thurs 18-Jun-2020

Title: A Transfer Learning approach to Space Debris Classification through Light Curve Analysis

Overview: In this seminar I will present progress in my research into space debris classification through the application of transfer learning to light curves extracted from telescope data. The development of a generalised characterisation method for space debris is a significant goal in Space Situational Awareness (SSA) in order to mitigate the risk to both current and future space missions. A dataset of real light curves is being collected and curated using telescope data provided by an industry partner. Due to the difficulties in labelling the real data and obtaining large quantities of observational data, a high-fidelity Blender-based simulation environment has also been developed with the ability to generate light curves for a range of space objects. This simulated dataset will be used for pre-training models with the aim of improving classification results on the real dataset.

Bio: James graduated with a Bachelor of Aeronautical Space Engineering from the University of Sydney in 2016 and is currently a PhD student at ACFR.

Thurs 11-Jun-2020

Thesis Seminar

Thurs 4-Jun-2020

Title: Intelligent Robotic Non-Chemical Weeding

Abstract:  In this seminar I will present results from an industry research project with the Grains Research and Development Corporation: Intelligent Robotic Non-Chemical Weeding.

Weeds compete with crop for nutrients, water and sunlight. Weed control and the associated lost revenue are estimated to cost grain growers $3.3 billion a year. Early reliance on herbicides as a complete weed control solution has made contemporary weed management a challenge due to the increasing prevalence of herbicide-resistant weeds. An emerging tool in weed management and precision agriculture is site-specific weed management, in which individual weeds are detected and targeted by a control method. Since this task cannot be solved with a purely mechanical system, it is only feasible at a small scale and requires a large amount of human labour. Robotic systems combined with computer vision and machine learning hold the potential to change the scalability and labour costs of site-specific weed management. Currently, commercial robotic site-specific weed management is limited to fallow land, where infra-red reflectance can be used to perform ‘green-on-brown’ detection. In this research project, the goal is to extend autonomous weeding capabilities to ‘green-on-green’ detection, that is, detecting weeds amongst crop.

Bio: Dr Asher Bender is a post-doctoral researcher at the Australian Centre for Field Robotics. His research interest is in applying machine learning to solve high-level problems using data collected by autonomous systems. He has worked in marine robotics and intelligent transportation, and is currently doing research in agricultural robotics.

Thurs 28-May-2020 9a

Asst Prof Matthew O'Toole, CMU Robotics

Homepage: http://www.cs.cmu.edu/~motoole2/
Tues 26 May 2020 2p

Thesis Seminar

Thesis Title: Analysing the Robustness of Semantic Segmentation for Autonomous Vehicles

Abstract: Intelligent systems require the capability to perceive and interact with the surrounding environment. Semantic segmentation, as a pixel-level classification task, is at the frontier of providing a human-like understanding to intelligent systems, enabling them to view and understand the world as we do. Deep learning-based semantic segmentation algorithms have shown considerable success for certain tasks in recent years. However, in real-world safety-critical applications such as autonomous vehicles, there are still many complexities that restrict the use of this technology. My research focuses on analysing the generalisation and robustness of semantic segmentation for intelligent vehicles. A system validation pipeline has been proposed to tackle the challenges of evaluating and quantifying the performance of semantic segmentation before deploying it to intelligent platforms. This method can be used in most urban traffic scenarios without the time and expense of using humans to generate labels by hand.
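
Pixel-level segmentation quality of the kind discussed above is commonly quantified with per-class intersection-over-union (IoU). The following stdlib-only sketch illustrates the standard metric on toy label grids; it is an illustration of the metric, not the validation pipeline proposed in the thesis:

```python
# Per-class intersection-over-union (IoU) for semantic segmentation.
# Toy illustration of the standard metric, not the author's pipeline.

def class_iou(pred, truth, cls):
    """IoU of one class between flat lists of per-pixel labels."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else float("nan")

def mean_iou(pred, truth, classes):
    """Mean IoU over the classes that actually appear."""
    scores = [class_iou(pred, truth, c) for c in classes]
    scores = [s for s in scores if s == s]  # drop NaN (absent classes)
    return sum(scores) / len(scores)

# 0 = road, 1 = vehicle: the prediction misses one vehicle pixel.
truth = [0, 0, 1, 1]
pred  = [0, 0, 1, 0]
print(class_iou(pred, truth, 1))  # 1 overlapping pixel / 2 in union = 0.5
print(mean_iou(pred, truth, [0, 1]))
```

Urban-driving benchmarks typically report the mean of per-class IoU over all annotated classes, which is what a validation pipeline like the one above would need to estimate without hand-made labels.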

Bio: Wei graduated with Bachelor of Engineering degrees from the Beijing Institute of Technology and the Australian National University in 2013, and submitted her PhD thesis at ACFR in March 2020.

Thurs 21-May-2020 1p

Title: Long-term map maintenance pipeline for autonomous vehicles

Abstract: One of the requirements for autonomous vehicles to operate persistently in typical urban environments is maintaining high-accuracy position information over time; in other words, their mapping and localisation system must be able to adapt to change. The classic definition of localisation based on a single-survey map is not suitable for long-term operation because it cannot detect and incorporate variations in the surroundings. In this work, we present a process to adjust a feature-based map to the actual environment. This adaptation pipeline seeks to lessen or ward off possible localisation difficulties while taking advantage of changes in the surroundings. We incorporate different sensor modalities which provide information about the environment and the state of the moving platform.

Bio: Stephany graduated with a Bachelor of Mechatronic Engineering from Universidad Autonoma de Occidente (Colombia) and is now a PhD student at ACFR (ITS research group).

Thurs 7-May-2020 11a

Title: Efficient validation method for Highly Automated Vehicle to ensure safety

Abstract: The wide-scale deployment of Autonomous Vehicles (AV) seems to be imminent despite many safety challenges that are yet to be resolved. It is well known that there are no universally agreed validation methodologies to guarantee absolute safety, which is crucial for the acceptance of this technology. My research focus is to propose an efficient method that can deliver better results than existing approaches such as test matrices, distance-based validation, and Monte-Carlo simulation.
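
One reason naive Monte-Carlo simulation is inefficient as a validation method is that failures are rare: resolving a failure probability p requires on the order of 1/p simulated scenarios. A stdlib-only sketch with a hypothetical per-scenario failure rate (illustrative numbers, not the author's proposed method):

```python
# Naive Monte-Carlo validation: estimate a small failure probability
# by sampling. With rare failures, small sample budgets typically
# observe zero failures, so the estimate is uninformative.
import random

def estimate_failure_prob(p_true, n_samples, seed=0):
    """Naive Monte-Carlo estimate of a Bernoulli failure probability."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if rng.random() < p_true)
    return failures / n_samples

p = 1e-4  # hypothetical failure rate per simulated scenario
small = estimate_failure_prob(p, 1_000)      # likely observes no failures
large = estimate_failure_prob(p, 1_000_000)  # enough samples to resolve p
print(small, large)
```

Variance-reduction and scenario-selection methods, such as those the research aims at, exist precisely to avoid this 1/p sample cost.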

Bio: Dhanoop completed a Bachelor of Computer Science and Engineering in India and a Masters in IT at the University of Wollongong. He is now a PhD student at ACFR.

Thurs 30-Apr-2020 1p

Title: Creating an AUV - Post-Mortem

Abstract: Over the last few years the Marine Group has created an AUV, from initial design concept through to a platform that has descended to 100 m beneath the waves and captured over fifty thousand images of the sea floor 1000 km offshore of the NSW coastline. This seminar will give some context for the group's activities and the thinking behind the decision to design and build in-house, before going through the roughly two-year process from first in-water trials to deployment at Elizabeth and Middleton Reefs near Lord Howe Island, with lessons learnt along the way.

Bio: Lachlan completed his PhD at the ACFR in 2016. His work has covered cooperative localisation and underwater localisation-related topics, with a hands-on deviation into getting a new AUV from completed hardware to mission-ready.

Thurs 23-Apr-2020 1p

Title: Computationally Efficient Dynamic Traffic Optimisation Of Railway Systems

Abstract: In this seminar, we discuss traffic optimisation for railway systems. These can be seen as multi-agent systems with movement constraints entailing logic conditions (e.g., precedence of utilisation of specific railway tracks) and, as such, the underlying optimisation programs are large NP-hard models. To limit computational complexity, we reduce optimisation horizons. This however makes trains “blind” to the presence of each other beyond the limits of these reduced horizons, with the potential to result in deadlocking. We present an approach to address this shortcoming borrowing notions from control systems theory. We also discuss other complexity reduction mechanisms enabled by this result.
We cover examples illustrating cases where the optimisation models determine traffic patterns that, according to feedback we received from several train controllers, surpass human ability. We also briefly touch upon the challenges of deploying these automation techniques in real life in a commercial setting involving a large freight network owned by Rio Tinto.
We also take this opportunity to briefly present current open opportunities for work / collaboration at RTCMA.

Bio: Robin completed his PhD at ETH Zurich, Department of Electrical Engineering and Information Technology, under the supervision of Prof. Manfred Morari. His research focuses on the application of mathematical optimisation techniques, as well as computational methods for handling large scale systems and contexts subject to uncertainty. He has been Research Fellow at the Rio Tinto Centre for Mine Automation since 2016, and became team lead for the project "Pit to Port Optimisation" in 2019.
Thurs 16-Apr-2020 1p

Title:  Two-Level Hierarchical Planning in a Known Semi-Structured Environment

Abstract: The application of motion planning for autonomous vehicles has primarily focused on either highly structured or unstructured environments. However, many real-world environments share the characteristics of both and can be classified as semi-structured. Adapting the strategies from other environments to semi-structured ones, although possible, does not produce trajectories with the required characteristics, especially when the environment is dynamic. In this talk I present a practical two-level hierarchical planning strategy consisting of a discrete lane-network-based global planner and a Hybrid A* local planner that (i) generates a smooth, safe and kinematically feasible path in real-time; and (ii) considers structural constraints of the environment from an a priori map. I will present preliminary results from the field test at Callan Park and future directions of the project.

Bio: Karan recently joined the ITS research group of ACFR after completing his PhD from UNSW in 2019. He is currently focussing on navigation strategies for autonomous vehicles.

Thurs 9-Apr-2020 1p

Title: Tree crop analysis using mobile LiDAR: From light to pruning

Abstract: Commercial fruit growers find it helpful to understand how their crops are growing on an orchard scale and tree scale.  This enables them to make decisions regarding actions like pruning.  There are a number of tools for measuring tree growth factors which involve the grower manually inspecting each tree, which is prohibitively difficult. Instead, we can use modern LiDAR technology to digitise the trees and perform detailed analyses in silico.  In this talk I present my work in using point cloud scans of tree crops from mobile handheld LiDAR to analyse the light interception and tree structure characteristics.  I will also discuss the possibility of using these methods to inform automatic pruning decisions, since historically these decisions are made by conventional wisdom rather than tree-specific optimisation.

Bio: Fred graduated with a Bachelor of Mechatronic Engineering and Computer Science from UNSW in 2016 and is now a PhD student at ACFR focusing on LiDAR applications in tree crops.

Thurs 2-Apr-2020 1p

Thurs 26-Mar-2020 1p



Thurs 6-Feb-2020 1p

Chihyung Jeon

Title: Talking over the robot: A field study of strained collaboration in a dementia-prevention robot class in South Korea

Abstract: I will present a case study of Silbot – a “dementia-prevention robot” – at a regional health center in South Korea, which I conducted with my graduate students at KAIST. From our on-site observation of the Silbot classes, we claim that the efficacy of the robot class relies heavily on the “strained collaboration” between the human instructor and the robot. “Strained collaboration” refers to the ways in which the instructor works with the robot, attempting to compensate for the robot’s functional limitation and social awkwardness. In bringing Silbot into the classroom setting, the instructor employs characteristic verbal tones, bodily movements, and other pedagogical tactics. The instructor even talks over the robot, downplaying its interactional capacity. We conclude that any success of such robot programs requires a deeper understanding of the spatial and human context of robot use, including the role of human operators or mediators and also that this understanding should be reflected in the design, implementation, and evaluation of robot programs.

Bio: Chihyung Jeon is an associate professor of science, technology, and policy at KAIST (Korea Advanced Institute of Science and Technology). He received his PhD degree in STS (Science, Technology & Society) at the Massachusetts Institute of Technology and has conducted research at the Max Planck Institute for the History of Science in Berlin and the Rachel Carson Center for Environment and Society in Munich. His research focuses on the sociocultural relationship between humans and technologies. He is currently working on cultures of AI and robotics in South Korea and participating in LIFEBOTS Exchange, an EU-funded research network on social robots for welfare and healthcare. He is also interested in the technologies and cultures of simulation, remoteness, and humanlessness. Within KAIST, he is also affiliated with the Center for Anthropocene Studies, where he is looking at scientific and public practices about aerial conditions.

Tue 14-Jan-2020 2:00p

Suda Bharadwaj

Title: Assured Autonomy for Complex Systems

Abstract: Over the last decade, there has been an explosion in the use of autonomous systems and artificial intelligence in our daily lives. As human reliance on autonomy grows, so do the consequences of autonomous agents failing to achieve their mission. One area poised to make a fundamental impact is urban air mobility (UAM). UAM refers to on-demand air transportation services within an urban area. With projections indicating high-volume use of autonomous aircraft in urban air spaces, it is clear that advances in decision-making for autonomous systems with assured performance will play a key role in the advancement and acceptance of UAM. In this talk I explore the use of techniques from the field of formal methods in order to provide theoretical guarantees of performance and safety in multi-agent systems such as UAM. While formal methods provide powerful tools to formally specify and guarantee complex high-level requirements, they suffer from a lack of scalability, restricting their applicability to systems with multiple agents. We explore the use of runtime enforcement, or shielding, in order to guarantee the safety of complex systems at runtime without knowledge of the underlying systems’ design or goals. I will present our work in decentralizing the synthesis procedure in order to allow for use in systems with large numbers of agents, and demonstrate its effectiveness with some UAM-based examples.
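
The runtime-enforcement ("shielding") idea in the abstract can be illustrated in miniature: a shield sits between an agent and its environment and overrides any proposed action that would violate a safety specification. A stdlib-only toy sketch; the grid world, move set, and unsafe cell are invented for illustration, and real shields are synthesised from temporal-logic safety specifications rather than hand-coded like this:

```python
# Minimal runtime "shield": override proposed actions that would move
# an agent into an unsafe cell of a 3x3 grid. Toy illustration only.

UNSAFE = {(1, 1)}  # hypothetical forbidden cell
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def shield(state, proposed):
    """Return the proposed action if safe, else a safe fallback."""
    candidates = [proposed] + [a for a in MOVES if a != proposed]
    for action in candidates:
        dx, dy = MOVES[action]
        nxt = (state[0] + dx, state[1] + dy)
        if nxt not in UNSAFE and 0 <= nxt[0] <= 2 and 0 <= nxt[1] <= 2:
            return action
    raise RuntimeError("no safe action available")

# At (1, 0) the proposal "up" would enter the unsafe cell (1, 1),
# so the shield substitutes a safe alternative; a safe proposal
# passes through unchanged.
print(shield((1, 0), "up"))
print(shield((0, 0), "right"))
```

The key property sketched here is minimal interference: the agent's action is passed through whenever it is safe, which is what lets a shield guarantee safety without knowing the agent's design or goals.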

Bio: Suda Bharadwaj is currently a PhD student in the U-T-Autonomous systems lab at the University of Texas at Austin, supervised by Dr. Ufuk Topcu. His research interests involve assured autonomy using formal methods, reinforcement learning, and control. He completed his undergraduate studies at the University of Sydney with a BE/BSc double degree majoring in Aeronautical (Space) engineering and Physics. He received his MS in Aerospace Engineering at the University of Texas at Austin.




Date | Speaker | Topic
------2019------

Mon 16-Dec-2019 1:30p

Tobias Bellmann, DLR

Title: From Terramechanics to Flight Simulators – Robotic applications at the DLR Institute of System Dynamics and Control

Abstract: The talk focuses on the robotics activities at DLR’s Institute of System Dynamics and Control. Starting with a general overview of DLR’s and the institute’s major research fields, activities such as industrial robot research, robot-based flight simulators, and testbeds such as the DLR Terramechanics Rover Locomotion Lab (TROLL) are presented. A short insight into recent space robotics activities, such as the InSight/HP3 Mole and the experimental SCOUT rover, will also be given.

Bio: Dr. Tobias Bellmann has been working as a scientist at the German Aerospace Center (DLR) since 2007. After finishing his PhD in the field of robot path planning, he became head of a workgroup focused on motion simulation and virtual reality, working on methods for modelling, simulating and visualizing mechatronic systems.

Since 2016 he has headed the newly created DLR Systems and Control Innovation Lab (SCIL), conducting research on digitalization projects such as the Digital Product/Twin together with various industry partners.

Slides: DLR Institute of System Dynamics and Control + Robotics + SCIL.PDF, and DLR Institute of System Dynamics and Control V4.pdf.

Weds 11-Dec-2019 4p

Title: Estimation of Dynamic Systems under Arbitrary Unknown Inputs

Abstract: The topic of estimation of dynamic systems under arbitrary unknown inputs, also called unknown input decoupled estimation, has received much attention in the last few decades. This is due to its vast applications, especially in fault-tolerant estimation/control, security of cyber-physical systems, advanced vehicle applications, etc. In spite of the extensive available literature, there are very stringent requirements that limit the applicability of existing methods. In particular, most results rely on the following three restrictive assumptions: (a) for unbiased and minimum-variance estimation of the state/unknown input, the initial guess of the state has to be unbiased; (b) for filter existence and stability, the system needs to satisfy the so-called strong detectability criteria (this result was pioneered by Malo Hautus, co-inventor of the well-known PBH lemma, i.e., the observability lemma in control); (c) for the optimal filter design, the noise covariances and the disturbance shaping structures are assumed to be known exactly. In this seminar, we will report our recent work, directly motivated by the above gaps. In particular, we will discuss (almost) complete solutions resolving issues (a) and (b), and partial solutions towards issue (c).

Bio: He Kong was born and grew up in the city of Heze, Shandong Province, China. He received his Bachelor’s, Master’s and PhD degrees from China University of Mining and Technology, Harbin Institute of Technology, and the University of Newcastle, Australia, respectively. Early on, his research was mostly influenced by Guang-Ren Duan and Bin Zhou (at Harbin) and Graham Goodwin and Maria Seron (at Newcastle), amongst others. He is currently a research fellow in ACFR’s agricultural robotics team. His research interests include estimation and inference for cyber-physical systems, moving horizon estimation/control, field robotics, machine learning, and signal processing applications in agriculture. Recent years’ exposure to field robotics at ACFR has somewhat changed his mindset: while he still has some passion for developing theoretical methods, he has become equally or more interested in verifying them in the real world, on hardware, or at least using real data.

Thurs 5-Dec-2019 4pMina Henein and Jun Zhang, ANU

Title: Robust Object-aware SLAM for Dynamic Scene Understanding

Abstract: The static world assumption is standard in most simultaneous localisation and mapping (SLAM) algorithms. Increased deployment of autonomous systems to unstructured dynamic environments is driving a need to identify moving objects and estimate their velocity in real-time. Most existing SLAM-based approaches rely on a database of 3D models of objects or impose significant motion constraints. In this work, we propose a new feature-based, model-free, object-aware dynamic SLAM algorithm that exploits semantic segmentation to allow estimation of the motion of rigid objects in a scene, without the need to estimate object poses or have any prior knowledge of their 3D models. The algorithm generates a map of dynamic and static structure and has the ability to extract velocities of rigid moving objects in the scene. Its performance is demonstrated on simulated, synthetic and real-world datasets.

Mina Henein:

Mina is a PhD candidate at the Australian National University and the ARC Centre of Excellence for Robotic Vision, working on SLAM in dynamic environments. He is doing research under the supervision of Viorela Ila and Robert Mahony. His research interests include graph-based SLAM, dynamic SLAM and object SLAM, besides kinematics and optimization techniques. Mina received his B.Sc. in Engineering and Materials Science with Honours, majoring in Mechatronics, from the German University in Cairo (GUC), Egypt in 2012. He then worked in the business sector for a multinational FMCG for one year as a Near-East demand manager before pursuing his master's in Advanced Robotics. He received a double M.Sc. degree, the European Masters of Advanced Robotics (EMARo), from Universita degli Studi di Genova, Italy and Ecole Centrale de Nantes, France. Throughout his career, he has worked as a visiting research assistant at the Italian Institute of Technology (IIT) under the supervision of Roy Featherstone, and at the Autonomous Systems Lab (ASL) at ETH Zurich under the supervision of Peter Fankhauser, Marco Hutter and Roland Siegwart, where he carried out his master's thesis.

Jun Zhang:

Jun Zhang is a PhD student at the ARC Centre of Excellence for Robotic Vision, in the College of Engineering and Computer Science, Australian National University. Jun received ME and BE degrees from the School of Aeronautics of Northwestern Polytechnical University, China. During his Master's, Jun spent one and a half years at the Institute of Computer Science and Technology, Peking University as a visiting researcher. His research interests include visual SLAM in non-static environments, scene flow estimation and multi-model fitting.

Tues 3-Dec-2019 3p

Gert Kootstra, Wageningen University & Research, Netherlands

Cancelled due to flight changes, might or might not happen later in the week

Weds 27-Nov-2019 4p

Wanli Ouyang, School of Electrical & Information Engineering

Title: Exploring Deep Structures in Computer Vision tasks

Abstract: Structure in data provides rich information that helps to reduce the complexity and improve the effectiveness of a model. In this talk, an introduction will be given to recent progress in using deep learning as a tool for modeling structure in visual data. We show that observations about our problems are useful in modeling the structure of deep models and help to improve their effectiveness for many vision problems.

Bio: Wanli Ouyang received the PhD degree from the Department of Electronic Engineering, The Chinese University of Hong Kong. He is now a senior lecturer at the University of Sydney. His research interests include image processing, computer vision and pattern recognition. He is the first author of 7 papers in TPAMI and IJCV. He received the best reviewer award at ICCV. He serves as a guest editor for IJCV and demo chair for ICCV 2019. He has been a reviewer for many top journals and conferences such as IEEE TPAMI, TIP, IJCV, SIGGRAPH, CVPR, and ICCV. He is a senior member of the IEEE.

Tues 26-Nov-2019 2p

Thomas Schön, Uppsala University

Title: Sequential Monte Carlo and deep regression

Abstract: This talk has two (for now) loosely connected parts. In the first part we aim to provide intuition for the key mechanisms underlying the sequential Monte Carlo (SMC) method (including the popular particle filters and smoothers). SMC provides approximate solutions to integration problems where a sequential structure is present. The classical example of such a structure is offered by nonlinear dynamical systems, but we stress that SMC is significantly more general than most of us first thought. We will hint at a few ways in which SMC fits into the machine learning toolbox and mention a few interesting avenues for research. In the second part we develop a new approach to deep regression. While deep learning-based classification is generally addressed using standardized approaches, a wide variety of techniques are employed when it comes to regression. We have developed a new and general deep regression method with a clear probabilistic interpretation. We obtain good performance on several computer vision regression tasks (including a new state-of-the-art result on visual tracking). The loose connection lies in the use of the Monte Carlo idea in both topics. We believe that the connection between the two seemingly disparate topics will be strengthened over the coming years.
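
The bootstrap particle filter, the most common SMC instance, propagates a set of samples through the dynamics, weights them by the observation likelihood, and resamples. A minimal stdlib-only sketch for a scalar random-walk model with Gaussian observations; the model and parameters are chosen for illustration and are not from the talk:

```python
# Bootstrap particle filter for x_t = x_{t-1} + process noise,
# y_t = x_t + observation noise. Illustrative sketch of the SMC
# mechanism, with invented model parameters.
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=1):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # prior
    estimates = []
    for y in observations:
        # Propagate each particle through the random-walk dynamics.
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # Weight by the Gaussian observation likelihood N(y; x, r^2).
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

ys = [0.9, 1.1, 1.0, 0.95, 1.05]  # noisy observations of a state near 1
print(bootstrap_pf(ys)[-1])       # filtered estimate moves toward ~1
```

The propagate/weight/resample loop is the general SMC recipe; particle smoothers and the broader uses hinted at in the abstract build on the same three steps.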

Bio: Thomas B. Schön is Professor of the Chair of Automatic Control in the Department of Information Technology at Uppsala University. He received the PhD degree in Automatic Control in Feb. 2006, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001,  the BSc degree in Business Administration and Economics in Jan. 2001, all from Linköping University. He has held visiting positions with the University of Cambridge (UK), the University of Newcastle (Australia) and Universidad Técnica Federico Santa María (Valparaíso, Chile). In 2018, he was elected to The Royal Swedish Academy of Engineering Sciences (IVA) and The Royal Society of Sciences at Uppsala. He received the Tage Erlander prize for natural sciences and technology in 2017 and the Arnberg prize in 2016, both awarded by the Royal Swedish Academy of Sciences (KVA). He was awarded the Automatica Best Paper Prize in 2014, and in 2013 he received the best PhD thesis award by The European Association for Signal Processing. He received the best teacher award at the Institute of Technology, Linköping University in 2009. He is a Senior member of the IEEE and a fellow of the ELLIS society.

Schön has a broad interest in developing new algorithms and mathematical models capable of learning from data. His  main scientific field is Machine Learning, but he also regularly publishes in other fields such as Statistics, Automatic Control, Signal Processing and Computer Vision. He pursues both basic research and applied research, where the latter is typically carried out in collaboration with industry or applied research groups.

Mon 18-Nov-2019 11a

Rami Khushaba, BuildingIQ & UTS

Title: Improved Electromyogram (EMG) Pattern Recognition for Multifunction Prosthesis Control

Abstract: Myoelectric control employs pattern recognition (PR) systems to decipher the content of the Electromyogram (EMG) signals from the remaining muscles in the amputee’s stump, recovering lost functionality by controlling powered prosthetics. Limb prostheses are essential for maintaining personal independence and supporting effective inclusion in society. However, due to their poor control, imposed by the limited accuracy of hand movement recognition in clinical settings, EMG-driven prostheses are not widely accepted. This is attributed to the large gap between systems developed in labs using ideal settings on intact-limbed subjects and those suitable for online recognition on amputees. Such a gap is imposed by factors like the lack of intuitive control, poor system reliability, and the lack of robustness against practical problems such as limb position change, electrode shift, varying force levels, and EMG signal non-stationarity. Previous research has shown that the success of EMG PR systems mainly depends on the quality of the extracted features, as these have a direct impact on clinical acceptance. A huge effort has been made by several research groups to bridge the gap between lab settings and clinical implementations. Nonetheless, despite much advancement, there are still considerable challenges in translating research outcomes into a clinically viable implementation. A number of researchers have suggested that EMG alone might be inadequate for reliable control and that multi-modal sensory data is needed to complement the EMG features, e.g., accelerometers, gyroscopes, magnetometers, near-infrared spectroscopy and ultrasound images. An alternative approach considers different feature extraction methods, such as deep neural networks combined with high-density (HD) EMG.
In this presentation, we shed light on some of the latest developments in the field and show how the performance of properly engineered temporal-spatial feature extraction algorithms can approximate that of deep learning methods at a much lower computational cost while maintaining accuracy.

Bio: Dr. Rami Khushaba received his PhD degree in human-robot interaction, with a specific focus on signal processing and machine learning for controlling powered prosthetics for amputees (UTS, 2010). A major goal of his research is to develop clinically realizable and robust myoelectric control systems that can be made available to persons with limb loss. He has a significant number of publications and contributions in the field of myoelectric control. He previously held several positions, including lecturer and postdoctoral fellow at UTS, working on projects including exoskeleton and prosthesis control, driver drowsiness/fatigue detection, and consumer neuroscience and neuromarketing research. He then joined ResMed, the Australian/international leader in medical devices for sleep-disordered breathing detection and treatment, for nearly 5 years, working on algorithmic interventions for non-contact Doppler radar detection of SDB, heart failure and COPD symptom deterioration (with publications in prestigious journals and 3 patents). He has been with BuildingIQ since 2017 and currently leads the data science team, with a focus on HVAC energy optimization and machine learning control. He recently patented a new causal inference engine and pioneered its use for fault detection and tracking in HVAC systems with up to 20,000 IoT sensor readings per building.

Fri 15-Nov-2019 10a-noon

Ayoung Kim, KAIST

Robert Mahony, ANU

Speaker: Ayoung Kim, KAIST

Title: Enhancing robotic perception in the underwater environment

Abstract: This talk focuses on two major underwater perceptual sensors, namely optical and sonar imaging. The first part deals with underwater optical image enhancement, introducing model-based, non-model-based and learning-based image detail enhancement. The second part introduces recent trials of deep learning methods for sonar images, presenting a strategy and results obtained by training in a simulator and detecting objects in real sonar images.

_________

Speaker: Professor Robert Mahony, ANU

Title: An Equivariant Perspective on Spatial Awareness

Abstract: This talk considers a novel approach to spatial awareness for robotic systems based on exploiting properties of symmetry. I show how the classical SLAM problem can be formulated as equivariant kinematics on a homogeneous space under action from a novel semi-direct Lie-group that I term the SLAM group. Using this framework, I derive a nonlinear observer (on the lifted kinematics defined on the SLAM group) that integrates pose and landmark estimation in the same mathematical framework without resorting to linearisation. This approach provides a fully nonlinear state estimator with global convergence properties that is highly robust to noise and offers low-complexity real-time SLAM implementation for consumer robotic systems.

Bio: Robert Mahony is a Professor in the Research School of Engineering at the Australian National University.  He received his BSc in 1989 (applied mathematics and geology) and his PhD in 1995 (systems engineering) both from the Australian National University.  He is a fellow of the IEEE and was president of the Australian Robotics Association from 2008-2011. His research interests are in non-linear systems theory with applications in robotics and computer vision. He is known for his work in aerial robotics, geometric observer design, matrix subspace optimisation and image based visual servo control.

Weds 13-Nov-2019 4p

Intro to INCUBATE

Kate Maguire-Rosier

Title: Moving with robots: A review of dance performances involving human and robotic performers

Abstract: This presentation focusses on artistic projects involving robots, specifically in the context of live dance performance. Following Naoko’s talk last week on her collaborative motion project, Kate’s talk departs from the aesthetic facet of human-robot relationships and presents a review of dance performances where humans and robots collaborate choreographically on stage. She presents categories, identified together with Naoko, in which these performances sit, illustrating each category with a video-recording excerpt. This presentation closes with the following questions: “How might this research be used to inform, shape and develop a human-robot collaboration experiment?” and “How do artistic approaches contribute to robotics research?”

Bio: Dr Kate Maguire-Rosier is a Research Assistant to Research Fellow, Dr Naoko Abe, at the Sydney Institute for Robotics and Intelligent Systems. She is a Dance, Theatre and Performance scholar specialising in the subfields of disability performance and digital performance. In 2018, she obtained her PhD from Macquarie University for her ethnographic study, “Performances of ‘Care’: Dance Theatre Practice by and with Australian Artists with Disability”. From 2018-2022, she is Co-Convenor of the International Federation for Theatre Research’s “Performance and Disability” Working Group. Kate also works as Projects and Programs Manager at Ausdance NSW, the peak body for the NSW dance sector.

Weds 6-Nov-2019 4p

Title: Robotics and society: An overview of current research projects

Abstract: The talk will present the current projects undertaken under the “Robotics and Society” theme at the Sydney Institute for Robotics and Intelligent Systems. The Robotics and Society theme aims to explore the human-machine relationship and understand in a holistic way the potential impact of technologies on human behaviour and society. The talk will present three multidisciplinary projects involving collaboration across the University of Sydney and with international researchers. The presentation will pose the following question for discussion with the audience: “Why does robotics need multidisciplinary collaboration?”

Bio: Dr Naoko Abe is a Research Fellow at the Sydney Institute for Robotics and Intelligent Systems. She is a sociologist, specialising in social interaction and human movement, with a research focus in Robotics and Urbanism. She obtained a PhD in Sociology from Ecole des Hautes Etudes en Sciences Sociales (EHESS, Paris) and a teaching certificate of Kinetography Laban from Paris Conservatory (CNSMDP). In 2015, Naoko Abe was a Postdoctoral Fellow at the Laboratory for Analysis and Architecture of Systems – the French National Centre for Scientific Research (LAAS-CNRS) in Toulouse. In 2016–2017, she was a Renault-Junior International Research Fellow at the Centre for French-Japanese Advanced Studies of Paris (CEAFJP) coordinated by the EHESS France-Japan Foundation (FFJ). She participated in research funded by the French National Research Agency (ANR) as a Research Associate from November 2017 to June 2018.

Fri 01-Nov-2019 2p

Elizabeth Ratnam, ANU

Title: Creating a resilient carbon neutral electricity grid

Abstract: In recent years, a dramatic increase in electrical power generation from renewable energy sources has been observed in many countries. The grid-integration of customer-owned solar photovoltaics (PV) has been driven by government incentives and renewable energy rebates, including residential feed-in tariffs and the financial policy of net metering. However, new challenges arise in balancing the generation of electricity with variable demand at all times as traditional fossil fuel-fired generators are retired and replaced with intermittent renewable electricity sources. This presentation considers ways to integrate residential-scale battery storage co-located with solar PV, with a view of creating a resilient carbon neutral electricity grid.

Bio: Dr Ratnam earned the BEng (Hons I) degree in Electrical Engineering in 2006, and the PhD degree in Electrical Engineering in 2016, from the University of Newcastle, Australia. She subsequently held postdoctoral research positions with the Center for Energy Research at the University of California San Diego, and at the University of California Berkeley in the California Institute for Energy and Environment. During 2001–2012 she held various positions at Ausgrid, a utility that operates one of the largest electricity distribution networks in Australia. Dr Ratnam currently holds a Future Engineering Research Leader (FERL) Fellowship from the Australian National University (ANU) and she joined the Research School of Engineering at ANU as a research fellow and lecturer in 2018. Her research interests are in developing new and revolutionary approaches to control distribution networks with a strong focus on creating a resilient carbon neutral power grid.

Weds 30-Oct-2019 4p

Teja Digumarti

Yi Sun

Teja Digumarti
Title: Semantic Segmentation and Mapping in Natural Environments

Abstract: Research on 3D reconstruction and semantic segmentation has made great progress over the past decade. However, the vast majority of this research targets human-made environments such as indoor and urban outdoor scenes. These techniques do not translate directly to the natural world, because characteristics of natural structures such as self-similar and repeating elements, occlusions, complex geometry and semi-rigid structures pose challenges to existing techniques. This talk will present a few ways to tackle these challenges, specifically for semantic segmentation and mapping, with the end goal of accurately reconstructing natural structures such as trees and corals. The talk will explore the use of deep learning and learning from simulation as useful tools in addressing these challenges.

Bio: Tejaswi Digumarti is a research associate at the Sydney Institute of Robotics and Intelligent Systems. His research interest is in taking robots into the natural world, with a focus on scene understanding, mapping and 3D reconstruction techniques. Having successfully defended his thesis, he is yet to obtain his Ph.D. degree from ETH Zurich, Switzerland. His Ph.D. work was in collaboration with Disney Research. Prior to that he obtained his Master's degree in Robotics, Systems and Control from ETH Zurich, Switzerland and his Bachelor's degree in Electrical Engineering from IIT Jodhpur, India.

Yi Sun
Title: Soft robotic pad for manta ray inspired robotic swimmer

Abstract: The development of silicone-based soft pneumatic actuators (SPAs) has a history of more than two decades. However, the vast majority of existing SPAs share a single appearance: a finger-like one-dimensional (1D) body. Making a 2D SPA by adding a dimension to the 1D SPA can therefore be a breakthrough in the form factor of SPAs. Moreover, the additional dimension expands the motion types available to SPAs. This presentation will cover the development of a 2D SPA called the soft robotic pad (SRP), which is shaped into a flat 2D body and can be mechanically programmed to perform a variety of surface morphing at a constant thickness. A novel and well-designed fabrication technique will be presented, followed by its application in a manta ray inspired robotic swimmer.

Bio: Yi Sun is a research fellow at the Sydney Institute for Robotics and Intelligent Systems. His research interest and focus are soft robotics, especially soft-material-based pneumatic/hydraulic actuators and robots. He obtained his Ph.D. degree from the National University of Singapore (NUS) under the NUS Graduate School for Integrative Sciences and Engineering. Before that, he also had a one-year research experience in the Reconfigurable Robotics Lab at EPFL, where he started his research on soft robots.

Thurs 24-Oct-2019 2p

Ravi Garg, University of Adelaide

Title: SLAM for Learning and Learning for SLAM

Abstract: Decades of research in multiview geometry have provided us with real-time dense monocular tracking and mapping systems which are capable of reconstructing high fidelity maps of the world around us. However, these geometric approaches to monocular reconstruction drastically differ from how humans perceive and interact with their environment. We learn from the experience of having seen large numbers of highly correlated scenes from multiple viewpoints, and use this prior knowledge for effective perception. I am going to discuss ways to leverage the incredible efficiency of artificial neural networks to capture such knowledge and use it in existing monocular reconstruction and tracking pipelines for robust SLAM. Deploying supervised techniques for learning geometry is cumbersome and usually requires a large amount of annotated data, involving careful capture with calibrated sensors including LIDAR and IMUs. I will discuss how the basic principles of multi-view geometry and SLAM can be reused to train deep neural networks, to predict scene depth, normals, ego motion, and deformations with handheld or mounted commodity cameras alone.

Bio: Ravi Garg is a Senior Research Associate with the Australian Centre for Visual Technologies at The University of Adelaide, and is an Associate Research Fellow with the Australian Centre for Robotic Vision. He is working with Prof Ian Reid on his Laureate Fellowship project "Lifelong Computer Vision Systems". Prior to joining the University of Adelaide, he completed his PhD at Queen Mary University of London under the supervision of Prof Lourdes Agapito, where he worked on Dense Motion Capture of Deformable Surfaces from Monocular Video.

His current research interest lies in building learnable systems with little or no supervision which can reason about scene geometry as well as semantics. He is exploring how far the visual geometry concepts can help current deep neural network frameworks in scene understanding. In particular, his research focuses on unsupervised learning for single view 3D reconstruction, visual tracking in monocular video and weakly or semi-supervised semantic reasoning in images or videos. He is also interested in building real-time, semantically rich robust monocular SLAM systems which can leverage deep learning.

Mon 21-Oct-2019 3p

Will Reid, JPL

Title: Actively Articulated Wheel-on-Limb Mobility for Traversing Europa Analogue Terrain

Abstract: Mobile, in-situ exploration of Europa’s rugged, icy surface holds the potential for enabling discovery across multiple geologic units outside the exhaust-contaminated landing zone. The spatial and compositional diversity of surface salts and organics are of significance to our understanding of Europa’s history and biological potential. Our knowledge of Europa’s surface properties, both topographic and mechanical, is extremely limited, and additional data will not become available prior to the arrival of the Europa Clipper spacecraft. If the exploration of Europa continues to be an area of high science value, this work postulates that solutions to the challenges of mobility on uncertain, but likely challenging, surfaces should be developed now.

In this presentation, I discuss the development of a multi-modal locomotion system and the results of field trials performed on salt-evaporites and fractured glacial ice. Work was performed using the RoboSimian vehicle: a 32 degree-of-freedom, actively articulated mobility system. Three modes of mobility are compared: wheel rolling, push-rolling (inchworming) and wheel walking. Each mobility mode is designed to operate with articulated suspension whereby the normal load per wheel, body orientation, and available limb workspace are actively controlled. Each mode is presented individually alongside a discussion of its performance on terrain of varied slope and topographic roughness. Further, the utility of a multi-modal approach is presented, whereby vehicle immobilization was avoided during field trials through the selection of appropriate mobility modes as a function of terrain properties. Finally, the results of trials performed using a body-mounted sampling system and its ability to collect and process samples taken 10 cm beneath the surface are discussed.

Bio: Will Reid is a Robotics Technologist at NASA’s Jet Propulsion Laboratory. His research investigates wheel-on-limb mobility strategies for efficient robotic locomotion on the Ocean Worlds of Enceladus and Europa. Will received a PhD from the Australian Centre for Field Robotics at the University of Sydney and received bachelor’s degrees in Mechatronics Engineering and Computer Science from the University of Melbourne. His PhD thesis investigated system design, motion modelling and path planning for a high degree-of-freedom wheel-on-limb robot operating on a Martian analogue terrain.

Thurs 17-Oct-2019 1p

Title: Estimating noise covariance in filters using autocovariance least squares

Abstract: The extended Kalman filter is an extremely useful tool that is widely applied in robotics. In practice we often read the sensor noise covariance matrix off a datasheet and make some sort of educated (or not) guess about the process noise covariance. Several techniques exist for estimating these noise matrices in a principled way, but all have drawbacks around computation or accuracy; one of the most effective approaches is known as autocovariance least squares (ALS). In this talk I will present the application of ALS to estimating these covariances in a visual servoing problem.
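To make the idea concrete, here is a minimal scalar sketch of the ALS recipe: run a fixed (deliberately suboptimal) gain filter, measure the autocovariance of its innovations, and solve a small least-squares problem for the noise variances. The system, gain, and numbers below are illustrative assumptions, not the talk's visual-servoing setup.

```python
import random

def als_scalar(y, a, g, n_lags=5):
    """Estimate noise variances (q, r) for the scalar system
    x_{k+1} = a x_k + w_k,  y_k = x_k + v_k, from the innovations of a
    fixed-gain predictor xhat_{k+1} = a xhat_k + g (y_k - xhat_k)."""
    # Run the fixed-gain filter and collect innovations.
    xhat, nu = 0.0, []
    for yk in y:
        nu.append(yk - xhat)
        xhat = a * xhat + g * nu[-1]
    # Empirical autocovariances C_j = E[nu_k nu_{k+j}].
    n = len(nu)
    C = [sum(nu[k] * nu[k + j] for k in range(n - j)) / (n - j)
         for j in range(n_lags)]
    # With f = a - g, the steady-state error variance is
    #   P = (q + g^2 r) / (1 - f^2),
    # and the theoretical autocovariances are linear in (q, r):
    #   C_0 = P + r,   C_j = f^j P - f^(j-1) g r   for j >= 1.
    f = a - g
    alpha, beta = 1.0 / (1 - f * f), g * g / (1 - f * f)
    rows = [(alpha, beta + 1.0)]
    rows += [(f**j * alpha, f**j * beta - f**(j - 1) * g)
             for j in range(1, n_lags)]
    # Two-unknown least squares via the normal equations.
    s11 = sum(r0 * r0 for r0, _ in rows)
    s12 = sum(r0 * r1 for r0, r1 in rows)
    s22 = sum(r1 * r1 for _, r1 in rows)
    b1 = sum(r0 * c for (r0, _), c in zip(rows, C))
    b2 = sum(r1 * c for (_, r1), c in zip(rows, C))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Simulate the true system, then recover its noise variances.
random.seed(0)
a, q_true, r_true = 0.9, 0.5, 1.0
x, y = 0.0, []
for _ in range(200_000):
    x = a * x + random.gauss(0.0, q_true ** 0.5)
    y.append(x + random.gauss(0.0, r_true ** 0.5))
print(als_scalar(y, a, g=0.3))  # approximately (0.5, 1.0)
```

The matrix-valued version used in practice follows the same pattern, stacking lagged innovation covariances into one linear least-squares problem in the entries of Q and R.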

Bio: Jasper completed a Bachelor of Mechatronic Engineering at the University of Sydney in 2016 and is currently a PhD student with the ACFR, focusing on active perception for manipulation.

Thurs 10-Oct-2019 4p

Kunming Li (ITS, ACFR)

Weiming Zhi (from Fabio Ramos's group)

Stuart Eiffert (Agri-robotics group, ACFR)

He Kong (organizer)

Title: Human/animal motion prediction and robotic path planning in dynamic environments

Abstract: Fully autonomous and safe operation of intelligent platforms relies on an accurate understanding and prediction of the changing environment. This is especially true for scenarios such as campuses and human-robot shared workspaces, where human behaviour changes dynamically. As such, anticipating human motion and planning paths that account for such predictions become important yet challenging problems. These topics have received increasing attention in recent years.

This is a focused session on human/animal motion prediction and robotic path planning in dynamic environments such as crowds or traffic. There will be three presenters, all USYD PhD students. Each will give a 15-minute presentation, followed by Q&A. Presenters, talk titles, and bios, in order:

1. Kunming Li, Interactions between Autonomous Vehicles and Pedestrians

Kunming Li is in the first year of his PhD in the ITS team, ACFR. He graduated from ANU and previously worked as a research assistant at Data61. He is interested in computer vision and deep learning. In the seminar, he will briefly talk about his PhD project and current research progress. His PhD project aims to enable vehicles to interact with pedestrians safely and efficiently. Currently, he is exploring the use of generative adversarial networks (GANs), which are widely used in learning-based algorithms, to predict pedestrian trajectories.

2. Weiming Zhi, Kernel Trajectory Maps for Multi-Modal Probabilistic Motion Prediction

Weiming (William) Zhi is a PhD candidate with the School of Computer Science, supervised by Professor Fabio Ramos. He received his Bachelors of Engineering (Honours) at the University of Auckland, with a focus on mathematical optimisation and operations research. His honours thesis explores alternative market clearing mechanisms in energy markets. His current research interests lie in utilising machine learning methods to obtain robust solutions in robotics problems, with specific interests in probabilistic methods for motion prediction, and robust motion planning methods in dynamic environments.

3. Stuart Eiffert, Bridging the Gap between Prediction and Planning

Stuart Eiffert joined the ACFR in 2018 and is currently completing a PhD within the Agriculture group, supervised by Salah Sukkarieh. He has previously worked as a Research Associate at the Centre for Autonomous Systems at UTS and as a research engineer at Laing O'Rourke's R&D group, developing distributed human-robot interaction systems for use in construction and public transport. His current work focuses on robotic motion planning in dynamic environments, where he is extending trajectory prediction models to learn the response of a crowd to a robot's planned action.

Weds 02-Oct-2019 4p

Title: Rio Tinto Centre for Mine Automation

Abstract: The Rio Tinto Centre for Mine Automation (RTCMA) is an industry funded centre focussed on applying state of the art techniques in data fusion, machine learning, automation, perception and optimisation to real world mining problems in order to improve Rio Tinto's commercial operations. This talk will provide an overview of the centre's current activities and research highlights since its inception in 2007, across projects including picture compilation for mining systems, automation of production drilling and light vehicles, orebody modelling and estimation, and vehicle dispatching.

Bio: Andrew Hill is the director of the Rio Tinto Centre for Mine Automation (RTCMA) at the Australian Centre for Field Robotics (ACFR). He received his PhD from ACFR, USyd in 2012, working on planning and multi-agent mapping for an autonomous ground vehicle (Argo) project. He was employed as a Research Fellow in RTCMA on an autonomous ground vehicle project, and from 2014 led the Asset Optimisation theme of RTCMA working on fleet dispatch algorithms. His research interests include mobile robotics and autonomous systems in outdoor environments, mapping and trajectory planning algorithms, and optimisation of scheduling and task assignment in multi-agent systems.

Weds 25-Sept-2019 1p

Special session: Martin Tomitsch, Jathan Sadowski, Stewart Worrall

Title: Designing interactions between human and autonomous vehicles

Abstract: This presentation reports on a multidisciplinary, collaborative effort that investigates the complex problem of how humans can safely interact with autonomous vehicles. The ITS group at the Australian Centre for Field Robotics has been working towards developing small electric vehicles that can provide last mile transport in an urban environment. It became apparent that the technology behind this endeavour was missing an understanding of the human aspect of the task - how the vehicle can communicate its intention to the surrounding pedestrians such that the people outside the vehicle can understand and trust the behaviour of the vehicle, and how the vehicle can improve the safety and efficiency of the interaction. The expertise of the Design Lab and the Smart Urbanism Lab within the School of Architecture, Design and Planning has been crucial in incorporating people into the design process to understand how technology can be used to shape these interactions.

The presentation will introduce the project, but also the experiences of working in a multidisciplinary project with researchers from different backgrounds. With funding from CRIS/SIRIS, we have built a team of researchers and PhD students working in this area, and have generated joint publications and an ARC discovery proposal that is currently under review.

Bio(s)

Dr Jathan Sadowski is a postdoctoral research fellow in smart cities in the School of Architecture, Planning, and Design at the University of Sydney. His research focuses on two broad areas: the process of making smart urbanism a reality, from imagination to implementation; and the political economy of designing and using data-driven, networked, and automated technology.

Associate Professor Martin Tomitsch is Chair of Design and Director of Innovation at the University of Sydney. His research focuses on the role of design for shaping the interactions between people and technology, and how digital technology and design processes can improve life in cities. 

Dr Stewart Worrall is a research fellow in the intelligent transport systems (ITS) group of the Australian Centre for Field Robotics. His research is focused on enabling vehicle automation by understanding the complex interactions that occur in an urban road environment. This includes exploring challenges in localisation, obstacle detection/avoidance and path planning and prediction. He is also responsible for the operation of the group's autonomous electric vehicles.

Mon 23-Sept-2019 3:00p

Sandeep Manjanna, McGill

Title: Reinforcement Learning for Efficient Robotic Sampling in Outdoor Environments

Abstract: In this talk I present my research on using techniques from reinforcement learning for efficiently sampling scientific data in large-scale outdoor environments. The techniques presented generate paths to efficiently measure and then mathematically model a scalar field by performing non-uniform measurements in a given region of interest. In particular, the class of scalar fields considered are physical or virtual parameters that vary spatially, such as depth of the sea floor, algae blooms, or suspended particles in the air. As the measurements are collected at each sampling location, they are used to compute an estimate of the large-scale variation of the phenomenon of interest. I present techniques to compute a sampling path that minimizes the expected time to accurately model the phenomenon of interest by visiting high information regions (hotspots) using non-myopic path generation based on reinforcement learning. I will briefly talk about the platforms built and used for evaluating these sampling algorithms in real world applications and also mention the challenges involved in conducting field experiments in harsh outdoor environments.

Bio: Sandeep Manjanna is a PhD candidate at Mobile Robotics Lab (MRL) in the School of Computer Science at McGill University, Montreal, Canada. His area of research is field robotics with a focus on designing planning algorithms for autonomous vehicles to sample and map challenging environments. His current research includes adaptive sampling and surveying of marine and freshwater environments to reconstruct static field maps of physical phenomena such as, dissolved oxygen, plankton density, and turbidity. The focus of his research is on building representative field maps by collecting samples in an efficient manner so that large-scale maps can be generated with a limited battery life of the robotic vehicles. He has developed algorithms to persistently survey the coral reefs by autonomously capturing the images of the corals and by sampling the water quality measurements over the reefs to assess their health. He is interested in studying and understanding large-scale marine ecosystems and the effects of environmental changes on these ecosystems.

Fri 20-Sept-2019 3:30p

Ben Morrell, NASA JPL

Title: Robotics at NASA’s Jet Propulsion Laboratory, and the DARPA Subterranean Challenge

Abstract: NASA’s Jet Propulsion Laboratory is famous for flagship space exploration missions such as the Mars rovers, the Cassini mission to Saturn and the Voyager spacecraft journeying beyond the solar system. While these activities take up a large portion of the laboratory, there is also a substantial proportion of the lab devoted to robotics research. This talk will give an overview of some of the robotics activities at JPL, both flight and research. More details will then be presented on the work being done as part of the DARPA Subterranean Challenge, an international competition requiring rapid multi-robot exploration of unknown underground environments, with substantial perceptual, mobility and communication challenges. There will be a particular focus on the perception components of the challenge: global localization, mapping, locating objects of interest and state estimation.

Bio: Benjamin Morrell is a Robotics Technologist at the NASA Jet Propulsion Laboratory (JPL), California Institute of Technology, working on autonomous navigation technology for robots flying, exploring underground and in space. He is the perception lead on team CoSTAR, a collaboration between JPL, MIT and Caltech competing in the DARPA Subterranean Challenge. 

Previously he worked at JPL as a postdoctoral researcher after completing an internship there during his PhD, which included work on high-speed autonomous flight of quadrotors. He completed his PhD at the University of Sydney in the School of Aerospace, Mechanical and Mechatronic Engineering, the same institution where he completed his Bachelor of Aeronautical (Space) Engineering in 2013. Ben’s PhD thesis focused on autonomous navigation for aerial and space-based robots, considering localization, mapping, trajectory planning and control.

Ben is active in connecting Australian students, researchers and entrepreneurs with opportunities, expertise and experience in the US space industry, and was heavily involved in outreach activities while in Australia (AIAA Sydney Section, and Zero Robotics).

19-Sept-2019

Jack Umenberger

Title: Learning robust LQ-controllers using application oriented exploration

Abstract: This talk concerns the problem of learning robust linear quadratic (LQ) controllers, when the dynamics of the linear system are unknown. First, we propose a robust control synthesis method to minimize the worst-case LQ cost, with high probability, given empirical observations of the system. Next, we propose an approximate dual controller that simultaneously regulates the system and reduces model uncertainty. The objective of the dual controller is to minimize the worst-case cost attained by a new robust controller, synthesized with the reduced model uncertainty. The dual controller is subject to an exploration budget; i.e., a limit on the allowable worst-case cost incurred during exploration, given our current understanding of the system uncertainty. Numerical experiments demonstrate superior performance of the proposed robust LQ-controller over existing methods. Moreover, the dual control strategy is observed to significantly outperform common epsilon-greedy random exploration strategies.
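As background for the talk, the nominal (known-dynamics) building block is the LQ Riccati recursion. The scalar sketch below computes the optimal gain for a fully known system; it is not the robust worst-case or dual method of the talk, and the plant numbers are illustrative.

```python
def lq_gain(a, b, q, r, iters=200):
    """Scalar discrete-time LQR: iterate the Riccati recursion
        P <- q + a^2 P - (a b P)^2 / (r + b^2 P)
    to its fixed point and return the optimal state-feedback gain k,
    for the control law u = -k x."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)

# Example: unstable plant x_{k+1} = 1.2 x_k + u_k with cost weights Q = R = 1.
k = lq_gain(a=1.2, b=1.0, q=1.0, r=1.0)
print(k)  # approximately 0.794; closed-loop pole a - b k is inside the unit circle
```

The robust synthesis in the talk replaces the single known pair (a, b) with an uncertainty set estimated from data, and minimises the worst-case cost over that set.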
Thurs 12-Sept-2019 1p

Title: How now lame cow? Automatic lameness detection with 3d sensors

Abstract: Lameness in dairy cows is a prevalent health issue impacting both animal welfare and economic performance. Despite these costs, lameness prevalence has been shown to be severely underestimated. This is partially due to the time and expertise required to systematically recognise and score lame cows; this infrequent manual process often suffers from subjectivity and low consistency. Automatic lameness detection can potentially provide an objective, consistent lameness assessment at a higher temporal resolution, while better distinguishing lameness severity levels.

Bio: John Gardenier completed a BSc in 2010, and an MSc in 2014, both in Aerospace Engineering at Delft University of Technology in the Netherlands, specialising in control systems during his MSc. During internships at Nissan in Japan and at a research laboratory in San Diego, John assisted in developing human-machine interfaces for drive-by-wire technology and driving simulators for medical diagnostics. In 2016, after working in the Netherlands in the fields of automation, robotics, and flight simulation, he commenced a PhD at the Australian Centre for Field Robotics (ACFR) at The University of Sydney, Australia. John’s PhD thesis is a joint project between the ACFR and the University's dairy research group with the aim of developing advanced perception for livestock agriculture, specifically automatic lameness detection in dairy cattle. This will utilise the latest advances in two of John’s key interests, machine vision and deep learning, in order to solve a real-world problem facing the dairy industry.

Fri 06-Sept-2019 2p

Thierry Peynot, QUT

Title: Resilient Perception for Robotics in Challenging Environmental Conditions: In a mine or a knee, in the presence of smoke, dust or refractive objects

Abstract: In this seminar we will discuss different methods and strategies to endow robots with resilient perception in a variety of particularly challenging conditions. The case studies we will consider include: visual-based localisation and mapping in underground mines and in minimally-invasive orthopedic surgery, multi-sensor localisation and obstacle detection in the presence of smoke, fog or airborne dust, and structure from motion in the presence of refractive objects.

Bio: Thierry Peynot is an Associate Professor and Mining3 Chair in Mining Robotics in the Robotics and Autonomous Systems Discipline at the Queensland University of Technology (QUT), Australia. He is also Program Director (Automation) at Mining3 in Brisbane. Between December 2007 and February 2014 he was a Research Fellow at the Australian Centre for Field Robotics (ACFR), The University of Sydney.

He received his PhD from LAAS-CNRS, University of Toulouse (INPT), France. From 2005 to 2007 he was also an Associate Lecturer at the University of Toulouse. In 2005 he also worked at NASA Ames Research Center in California.

His current research interests focus on mobile robotics and autonomous systems in challenging environments, and include: resilient perception, multimodal sensing, sensor data fusion, robotic vision, mapping and localisation, and terrain traversability estimation for unmanned ground vehicles.

29-Aug-2019

Title: Cooperative Perception Based on V2X Network in Intelligent Transportation Systems

Abstract: The upcoming widespread deployment of DSRC technology will enable the sharing of multi-modal sensory information among intelligent Road Side Unit (IRSU) infrastructure and smart vehicles fitted with communication hardware and advanced perception capabilities. The project focuses on the development of a general framework for cooperative data fusion to integrate data coming from different sources with their own uncertainties. These algorithms will be used to propagate estimates of position, context and associated risk for all road users and vehicles in proximity. This information will be critical to extend the sensing capabilities of smart vehicles beyond the visual line of sight, which in complex traffic scenarios can be heavily restricted.
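The abstract does not commit to a particular fusion rule; covariance intersection is a common conservative choice when combining on-board and V2X-shared estimates whose cross-correlation is unknown (shared tracks may double-count information under naive fusion). A minimal 2D sketch, with all numbers illustrative:

```python
def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def covariance_intersection(x1, P1, x2, P2, steps=101):
    """Fuse estimates (x1, P1) and (x2, P2) with unknown cross-correlation:
    P(w)^-1 = w P1^-1 + (1-w) P2^-1, sweeping w to minimise trace(P)."""
    I1, I2 = inv2(P1), inv2(P2)
    a1, a2 = matvec(I1, x1), matvec(I2, x2)
    best = None
    for i in range(steps):
        w = i / (steps - 1)
        info = [[w * I1[r][c] + (1 - w) * I2[r][c] for c in range(2)]
                for r in range(2)]
        P = inv2(info)
        x = matvec(P, [w * a1[k] + (1 - w) * a2[k] for k in range(2)])
        tr = P[0][0] + P[1][1]
        if best is None or tr < best[0]:
            best = (tr, x, P)
    return best[1], best[2]

# Own track: pedestrian at (10, 0) m, tight in x, loose in y.
# IRSU track shared over V2X: (12, 0) m, loose in x, tight in y.
x, P = covariance_intersection([10.0, 0.0], [[1.0, 0.0], [0.0, 4.0]],
                               [12.0, 0.0], [[4.0, 0.0], [0.0, 1.0]])
print(x, P)  # fused estimate, consistent regardless of cross-correlation
```

Unlike the independent-sources information-filter update, the result stays consistent even if both tracks were partly derived from the same observations.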

Bio: Dr. Mao Shan is currently a Research Fellow at the ACFR, The University of Sydney. He received his Ph.D. degree from The University of Sydney in 2014. He was a Research Associate at the ACFR from 2014 to 2016, and a Research Fellow at Nanyang Technological University, Singapore, from 2016 to 2017. His research interests include autonomous systems, localization and tracking algorithms and applications.

22-Aug-2019

James Ward, Nicky Vera, Sam Richards

Title: How Teenagers Built a Robot in Six Weeks

Abstract: The Drop Bears is a team of high school students and adult mentors that compete in the international FIRST Robotics Competition. The Drop Bears have been hosted by the ACFR for the past six years. At the beginning of each year, teams around the world are given the rules of a new game and have six weeks to build a robot from scratch. Two of the students from the team will talk about the technical challenges they faced in this year's game, how they solved them, and what went wrong along the way (a lot!!). Ultimately the robot won the Autonomous Award in the Sydney competition, and made it to the semi-finals in Calgary, Canada.

Bios: James Ward is a Research Fellow at the ACFR. He has been the Head Mentor for the Drop Bears since the team moved to the ACFR in 2013. Despite being nicknamed by the team "The Grumpy Old One" he has never had more fun than his time with The Drop Bears. Nicky Vera is in Year 11 at Sydney Secondary College, Blackwattle Bay Campus. Nicky has been with The Drop Bears for 2 years, focusing mainly on mechanical design and fabrication. He has been the team lead on a number of mechanisms on the team's robots. In 2019 he was also a member of the Drive Team - the students operating the robot in the high pressure environment of the matches. If he could be any animal, he would choose to be an otter. Sam Richards is in Year 10 at The Shore School. He has also been with the team for 2 years. Spending his time on software and controls, he was the lead on the computer vision system for the 2019 robot. His favourite colour is puce, and he loves Siamese fighting fish.

15-Aug-2019

Tobi Kaupp

Title: Robotics in Context of Industry 4.0 - Applied Research and Teaching Driven by German Industry Demands

Abstract: In this talk, I will report on the activities in my new role as a Research Professor for Digital Production and Robotics at the University of Applied Sciences Würzburg-Schweinfurt, Germany. Firstly, I will explain the role and research environment of a University of Applied Sciences. Secondly, I will discuss the requirements of local industry to automate production and assembly in smart factories using Collaborative Robots (CoBots) and Automated Guided Vehicles (AGVs). Thirdly, I will present our plans to introduce an international B.Eng. in Robotics degree program with its content driven by German industry demands.

Bio: Dr. Tobias Kaupp is currently a Professor for Digital Production and Robotics at the University of Applied Sciences Würzburg-Schweinfurt, Germany. Prior to taking this position in November 2018, Tobias was a director of the Sydney-based mobile robotics company Marathon Robotics for 10 years, which he co-founded in 2007 with two other PhD graduates from the Australian Centre for Field Robotics (ACFR) at the University of Sydney. Tobias completed his PhD thesis at the ACFR on the topic of Collaborative Human-Robot Interaction in 2008. He also holds a Diploma in Applied Physics (2001) and an M.Sc. in Mechatronics (2003), both awarded by the University of Applied Sciences Ravensburg-Weingarten, Germany.

08-Aug-2019

Ruigang Wang

Title: A Contraction Framework for LPV Control

Abstract: The Linear Parameter-Varying (LPV) control technique has widespread applications in nonlinear systems with a large operating range, e.g., flight control, robotics, and chemical processes. The existing LPV approaches (both local and global frameworks) only ensure either local stability to changing setpoints or global stability to a certain equilibrium. This talk focuses on a rigorous and unified LPV framework to derive global equilibrium-independent stability by using contraction theory.

Bio: Ruigang Wang received his B. E. degree from Beihang University, M. E. degree from Shanghai Jiaotong University, and Ph.D. degree from University of New South Wales, Sydney. He joined the ACFR in 2018 and works as a research associate focusing on control and optimization for nonlinear systems.

01-Aug-2019
special time: 3p

Suchet Bargoti, Abyss Solutions

Title: Computer Vision and Machine Learning for Offshore Inspections

Abstract: Abyss Solutions is a technology startup that is on the cutting edge of infrastructure asset inspections. We design and operate custom data gathering robots and use machine learning and advanced data analytics to interpret the torrent of data that we collect. Operations on offshore Oil & Gas platforms have high inherent production, safety and operational risks due to the challenging operational and environmental conditions. Regular asset integrity inspections and maintenance procedures are in place to help reduce these risks and improve safety. However, such procedures are typically conducted by hand, making them expensive, sparse, prone to subjective biases and time consuming. For example, divers are sent subsea to visually evaluate and report on degradation of support and operational structures. Furthermore, corrosion experts continuously walk through and examine the platforms to make timely decisions on managing accelerated degradation in the corrosive environment.
This talk presents an overview of advances and developments in computer vision and machine learning at Abyss Solutions, which are re-defining how inspections and assessment of such assets can be conducted in an accurate, comprehensive, cost-effective and timely manner.


Bio: Suchet Bargoti has a background in field robotics, with a particular interest in enabling machines to perceive the world around them and make autonomous, data-driven decisions. He developed his expertise in integrating machine learning and computer vision for robotics through a PhD at the Australian Centre for Field Robotics (ACFR). His research has impacted many applications, such as agricultural robotics, urban automation, underwater mapping and space exploration. At Abyss Solutions he leads the technical vision of the rapidly growing business in automating condition assessments of remote offshore and subsea assets.

25-Jul-2019

Title: Automated Surveys of Mobile Reef Fauna using AUV Collected Imagery

Abstract: The proposed method uses a deep learning based image segmentation and classification approach to identify and locate individual animals in AUV survey images, and generate an estimate of their spatial distributions using the vehicle's navigation information and a 3D habitat reconstruction generated from the imagery. This allows for observation of the habitat use of reef fauna across an entire reef with low levels of user input.

Bio: Nader completed a B.Eng/B.Sc in environmental engineering and marine science at the University of Western Australia in 2015. He joined the ACFR in 2016 and is undertaking a PhD with the marine robotics group focusing on observational techniques for reef fauna and their habitats. 

Slides: https://docs.google.com/presentation/d/1i5T7Hm1Ksdsilzfb8OsfXMl3dpo7SjE6zAUOALO76kM/edit?usp=sharing

18-Jul-2019

Title: Path Planning in Dynamic Environments with Consideration of Social Response

Abstract: Robotic path planning in dynamic environments such as crowds, traffic, or herds, requires the ability to predict future trajectories of surrounding individuals in order to plan a path accordingly. However, this planned path may now impact how these individuals respond, meaning that the original prediction may no longer be valid. Current methods generally do not have a way of considering the response of individuals when creating this plan.
To overcome this problem, we propose a generative model of agent motion, using both the observed past motion of an agent as well as a planned future action of the robot to predict the likely future motion of the agent. We show that this learnt model of response can be used in a heuristic path planner to iteratively optimise a given objective function. This allows for improved dynamic obstacle avoidance, as well as tasks that aim to control the future states of individuals, including livestock herding or avoiding undesired states in nearby individuals such as excessive braking of vehicles.

Bio: Stuart completed a B.E. degree in mechatronic engineering at the University of Technology Sydney in 2014. He has worked as a Research Associate for the Centre for Autonomous Systems at UTS, as well as a Robotics Engineer within Laing O’Rourke’s research and development group Engineering Excellence, where he applied computer vision systems to pedestrian detection in the construction industry. He joined the ACFR in 2018 and is currently undertaking a Ph.D. within the agriculture group, focusing on active perception in dynamic environments.

11-Jul-2019

Title: Distributed Identification of Contracting and Monotone Network Dynamics

Abstract: Large-scale monotone dynamic networks appear frequently in applications such as transport networks, power scheduling, thermal systems and modelling of epidemics. The identification of such a model from data, however, remains challenging due to the scale and feedback implied by their network structure. We introduce a convex set of monotone models with stability constraints that are amenable to distributed optimization.  Exploiting the properties of this model class, we develop a distributed algorithm that fits non-linear polynomial models that guarantee network stability through contraction. The model exists as information spread over the set of nodes, and the algorithm at no point requires the network structure or data to be collected at a single location, opening up the possibility for identification of extremely high dimensional non-linear models. The algorithm is demonstrated on simple linear examples and the identification of simulated non-linear traffic networks.

04-Jul-2019

Jacob Mackay

Title: Radar Based Active Perception

Abstract: Radar offers another perception modality in addition to lidar and vision for ground-based robotics; however, it is still not widely embraced. This is partly due to the challenges in actuating the sensor, as well as the signal processing techniques required to extract useful data. In this talk, we present an approach to optimising sensor actuation based on coverage requirements, in addition to a method for creating a 3D environmental reconstruction from 2D radar data. Included are initial results from simulation and field trials.

Bio: Jacob Mackay completed a B.E. degree in mechatronic engineering and a B.Sc. degree in nanoscience and technology at the University of Sydney in 2016. He joined the ACFR in late 2017 and is currently undertaking a Ph.D. in radar-based active perception.

27-Jun-2019

ICRA Wrapup

A discussion of key papers and ideas emerging at ICRA 2019. Intro slides: https://docs.google.com/presentation/d/18IdykwFzGFmr8rEMgjqlDFlEPeJSeHtB8PgQtUM3PAI/edit?usp=sharing
20-Jun-2019

Title: Forest Inventory from High Resolution Airborne LiDAR

Abstract: An autonomous pipeline that uses machine learning and computer vision to build forest inventories from high resolution airborne LiDAR is presented. The pipeline comprises an object detection framework to detect individual trees, a 3D fully convolutional autoencoder to segment their stems and an optimisation approach to fit shape models to the extracted stems - from which inventory metrics can be inferred. The proposed pipeline was validated using airborne LiDAR collected over two commercial forest plantations. 
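The final stage of the pipeline, fitting shape models to extracted stems, can be illustrated with a much simpler stand-in: an algebraic (Kåsa) circle fit to a single stem cross-section slice. This sketch uses synthetic data and is only a toy analogue of the shape models in the talk, not the presented method.

```python
# Toy sketch: least-squares circle fit to one simulated stem cross-section.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic lidar returns on a stem of radius 0.25 m centred at (1.0, 2.0).
t = rng.uniform(0, 2 * np.pi, 200)
x = 1.0 + 0.25 * np.cos(t) + 0.005 * rng.normal(size=200)
y = 2.0 + 0.25 * np.sin(t) + 0.005 * rng.normal(size=200)

# Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
# which is linear in the unknowns (cx, cy, c) and solvable by least squares.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
(cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
r = np.sqrt(c + cx**2 + cy**2)
print(round(cx, 2), round(cy, 2), round(r, 2))  # recovered centre and radius
```

From the recovered radius per slice, inventory metrics such as diameter at breast height follow directly.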
13-Jun-2019

Fletcher H. Fan

Title: Maximum entropy imitation learning using variational inference

Abstract: Imitation learning (IL) is an emerging technique for programming robots by demonstration. Current state-of-the-art IL algorithms require a significant amount of interaction with the environment to learn a policy and/or reward function, which limits their applicability to real-world robotics tasks. In this talk, we present a new algorithm for IL that can be thought of as probabilistic behavioural cloning. It only requires demonstration data for training, and jointly learns a policy and reward function. We include some preliminary experimental results in a range of simulated environments.
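The "probabilistic behavioural cloning" idea can be grounded with the simplest possible instance: maximum-likelihood fitting of a linear-Gaussian policy to demonstrations. This is only a baseline sketch on synthetic data; the talk's algorithm (joint policy and reward learning via variational inference) is not reproduced here.

```python
# Minimal behavioural-cloning baseline: fit pi(a|s) = N(K s, sigma^2) to demos.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic demonstrations: an expert acting linearly in the state plus noise.
K_true = np.array([[0.8, -0.2]])
states = rng.normal(size=(500, 2))
actions = states @ K_true.T + 0.1 * rng.normal(size=(500, 1))

# For a Gaussian policy, maximising the demonstration log-likelihood in K
# reduces to ordinary least squares on the (state, action) pairs.
K_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)
sigma_hat = np.std(actions - states @ K_hat)

print(np.round(K_hat.ravel(), 2), round(float(sigma_hat), 2))
```

The appeal noted in the abstract is that, like this baseline, the method needs only demonstration data and no environment interaction during training.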

03-Jun-2019

John Lygeros

Title: Coordination and control in next generation energy systems

Abstract: Energy systems increasingly involve large numbers of agents (for example, the buildings in a district), each with local decision-making capabilities, that interact through their common use of shared resources (for example, the energy conversion devices housed in an "energy hub"). The mutual constraints imposed by the shared resources require the actions of the agents to be coordinated. This can be accomplished through the exchange of information about global considerations of the system, provided, for example, by a central aggregator who can impose prices on certain actions, or mutual constraints among the agents. Since the local decision-making of each agent typically also involves a local cost and local constraints, the question that arises is what type of information exchange ensures that the decisions of the players converge to a sensible global fixed point. In many cases the process is further complicated by the presence of uncertainty, for example about the weather, and hence about energy demand and renewable generation; in such cases, the decisions of the agents not only have to be coordinated with their peers, but must also be robust against this uncertainty. In this talk we will discuss how such questions can be addressed using tools from distributed optimisation, robust optimisation, and game theory. The discussion will be motivated by applications to energy management in districts.

Bio: John Lygeros completed a B.Eng. degree in electrical engineering in 1990 and an M.Sc. degree in Systems Control in 1991, both at Imperial College of Science Technology and Medicine, London, U.K. In 1996 he obtained a Ph.D. degree from the Electrical Engineering and Computer Sciences Department, University of California, Berkeley. During the period 1996-2000 he held a series of research appointments at the National Automated Highway Systems Consortium, Berkeley, the Laboratory for Computer Science, M.I.T., and the Electrical Engineering and Computer Sciences Department at U.C. Berkeley. Between 2000 and 2003 he was a University Lecturer at the Department of Engineering, University of Cambridge, U.K., and a Fellow of Churchill College. Between 2003 and 2006 he was an Assistant Professor at the Department of Electrical and Computer Engineering, University of Patras, Greece. In July 2006 he joined the Automatic Control Laboratory at ETH Zurich, first as an Associate Professor, and since January 2010 as a Full Professor; he is currently serving as the Head of the laboratory. His research interests include modelling, analysis, and control of hierarchical, hybrid, and stochastic systems, with applications to biochemical networks, automated highway systems, air traffic management, energy systems, and camera networks. John Lygeros is a Fellow of the IEEE, and a member of the IET and the Technical Chamber of Greece; since 2013 he has served as the Treasurer of the International Federation of Automatic Control (IFAC) and a member of the IFAC Council.

30-May-2019
Title: Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences
Abstract: This paper proposes an automated method to obtain the extrinsic calibration parameters between a camera and a 3D lidar with as low as 16 beams. We use a checkerboard as a reference to obtain features of interest in both sensor frames. The calibration board center point and normal vector are automatically extracted from the lidar point cloud by exploiting the geometry of the board. The corresponding features in the camera image are obtained from the camera's extrinsic matrix. We explain the reasons behind selecting these features, and why they are more robust compared to other possibilities. To obtain the optimal extrinsic parameters, we choose a genetic algorithm to address the highly non-linear state space. The process is automated after defining the bounds of the 3D experimental region relative to the lidar, and the true board dimensions. In addition, the camera is assumed to be intrinsically calibrated. Our method requires a minimum of 3 checkerboard poses, and the calibration accuracy is demonstrated by evaluating our algorithm using real world and simulated features.
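The optimisation step described above can be sketched in miniature: a plain genetic algorithm searching over the six extrinsic parameters to align board-centre points and normals between the two sensor frames. Everything here is a hedged illustration on synthetic correspondences; the cost terms, GA operators, and settings are placeholders, not the paper's actual implementation.

```python
# Illustrative GA search for a 6-DOF extrinsic transform from synthetic
# board-centre / board-normal correspondences (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def rot(rpy):
    """Rotation matrix from roll-pitch-yaw angles (ZYX convention)."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def cost(theta, pts_l, nrm_l, pts_c, nrm_c):
    """Summed centre-point distance plus normal misalignment after transforming."""
    R, t = rot(theta[:3]), theta[3:]
    e_pt = np.linalg.norm((pts_l @ R.T + t) - pts_c, axis=1).sum()
    e_nm = np.linalg.norm(nrm_l @ R.T - nrm_c, axis=1).sum()
    return e_pt + e_nm

# Synthetic ground truth and several board poses (the method needs at least 3).
true_theta = np.array([0.05, -0.03, 0.1, 0.2, -0.1, 0.3])
pts_l = rng.uniform(-2, 2, (5, 3))
nrm_l = rng.normal(size=(5, 3))
nrm_l /= np.linalg.norm(nrm_l, axis=1, keepdims=True)
pts_c = pts_l @ rot(true_theta[:3]).T + true_theta[3:]
nrm_c = nrm_l @ rot(true_theta[:3]).T

# Generational GA: truncation selection, arithmetic crossover, annealed mutation.
pop = rng.uniform(-0.5, 0.5, (200, 6))
for gen in range(150):
    fit = np.array([cost(p, pts_l, nrm_l, pts_c, nrm_c) for p in pop])
    elite = pop[np.argsort(fit)[:40]]                            # keep best 20%
    parents = elite[rng.integers(0, 40, (160, 2))]
    children = parents.mean(axis=1)                              # crossover
    children += rng.normal(0, 0.05 * 0.97**gen, children.shape)  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([cost(p, pts_l, nrm_l, pts_c, nrm_c) for p in pop])]
print(np.round(best, 2))  # estimate of (roll, pitch, yaw, tx, ty, tz)
```

A population-based search like this is one plausible way to cope with the highly non-linear state space the abstract mentions, since it needs no gradients or careful initialisation.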
Bio: Surabhi Verma is a research affiliate in the Intelligent Transportation Systems group. She has mainly worked on automating the process of finding the camera-lidar extrinsics since she joined the ACFR in July last year. Prior to this, she completed her Bachelor degree in Electrical and Electronics Engineering at Visvesvaraya National Institute of Technology, India. During her undergraduate studies, she worked in IvLabs, the robotics lab of her institute, on projects relating to robotics and automation. In her senior year, she did an internship at the Technical University of Munich in the Robotics and Embedded Systems group, where she developed a framework that included complex non-convex polygons (with or without holes) for collision checking and hand-crafted traffic scenarios for CommonRoad - a motion planning benchmark to evaluate and compare motion planners. Her research interests include intelligent vehicles, mobile perception, and planning.