This document outlines the ARL Educational Robotic Autonomy Environment v0.1 (ARL-Edu v0.1), which relies solely on open-source projects of our lab and of the community. ARL-Edu offers an immediate opportunity for students to comprehensively experiment with autonomy issues for both aerial and ground robotic systems. A summary of the specific working environment follows below, while details can be found in the relevant repositories. It is noted that the part relating to Deep Learning and the part relating to model-based position control and autonomous exploration path planning are separate code workspaces. Although they can be used in combination, the instructions provided assume separate use in two distinct ROS workspaces.
Visual Place Recognition (VPR) has seen significant advances at the frontiers of matching performance and computational efficiency over the past few years. However, these evaluations are performed for ground-based mobile platforms and cannot be generalized to aerial platforms. The degree of viewpoint variation experienced by aerial robots is complex, with their processing power and on-board memory limited by payload size and battery ratings. Therefore, in this paper, we collect 8 state-of-the-art VPR techniques that have been previously evaluated for ground-based platforms and compare them on 2 recently proposed aerial place recognition datasets with three prime focuses: a) matching performance, b) processing power consumption, and c) projected memory requirements. This gives a bird's-eye view of the applicability of contemporary VPR research to aerial robotics and lays down the nature of the challenges for aerial VPR.
Developing learning-based methods for navigation of aerial robots is an intensive data-driven process that requires highly parallelized simulation. The full utilization of such simulators is hindered by the lack of parallelized high-level control methods that imitate the real-world robot interface. Responding to this need, we develop the Aerial Gym simulator that can simulate millions of multirotor vehicles in parallel with nonlinear geometric controllers on the Special Euclidean Group SE(3) for attitude, velocity and position tracking. We also develop functionalities for managing a large number of obstacles in the environment, enabling rapid randomization for learning of navigation tasks. In addition, we provide sample environments with robots carrying simulated cameras capable of capturing RGB, depth, segmentation and optical flow data in obstacle-rich environments. This simulator is a step towards developing a currently missing, highly parallelized aerial robot simulation with geometric controllers at a large scale, while also providing a customizable obstacle randomization functionality for navigation tasks. We provide training scripts with compatible reinforcement learning frameworks to navigate the robot to a goal setpoint based on attitude and velocity command interfaces. Finally, we open source the simulator and aim to develop it further to speed up rendering using alternate kernel-based frameworks in order to parallelize ray-casting for depth images, thus supporting a larger number of robots.
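To make the controller concept concrete, the sketch below shows a generic geometric attitude controller on SO(3) of the kind such simulators parallelize; the gains, inertia values, and function names are illustrative assumptions and not taken from the Aerial Gym codebase.

```python
# Minimal sketch of a geometric attitude controller on SO(3) (Lee-style).
# Gains, inertia and names are hypothetical, not from Aerial Gym.
import numpy as np

def vee(S):
    """Map a skew-symmetric matrix to its vector."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def attitude_control(R, omega, R_des, omega_des, J, kR=8.0, kW=2.5):
    """Compute a body-frame control moment tracking a desired attitude."""
    e_R = 0.5 * vee(R_des.T @ R - R.T @ R_des)   # attitude error on SO(3)
    e_W = omega - R.T @ R_des @ omega_des        # angular-velocity error
    # Feedback plus gyroscopic compensation term.
    return -kR * e_R - kW * e_W + np.cross(omega, J @ omega)

# Example: recover from a small roll offset toward hover attitude.
J = np.diag([0.01, 0.01, 0.02])                  # assumed inertia [kg m^2]
R = np.array([[1, 0, 0],
              [0, np.cos(0.1), -np.sin(0.1)],
              [0, np.sin(0.1),  np.cos(0.1)]])
M = attitude_control(R, np.zeros(3), np.eye(3), np.zeros(3), J)
print("control moment:", M)
```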
In this work we present a new methodology for learning-based path planning for autonomous exploration of subterranean environments using aerial robots. Utilizing a recently proposed graph-based path planner as a "training expert" and following an approach relying on the concepts of imitation learning, we derive a trained policy capable of guiding the robot to autonomously explore underground mine drifts and tunnels. The algorithm utilizes only a short window of range data sampled from the onboard LiDAR and achieves an exploratory behavior similar to that of the training expert with more than an order-of-magnitude reduction in computational cost, while simultaneously relaxing the need to maintain a consistent and online reconstructed map of the environment. The trained path planning policy is extensively evaluated both in simulation and experimentally within field tests relating to the autonomous exploration of underground mines.
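As a rough illustration of the imitation-learning setup, the following sketch trains a small network to reproduce expert actions from a window of range data via behavior cloning; the architecture, window length, and action space are assumptions and do not reflect the paper's actual policy or training pipeline.

```python
# Minimal behavior-cloning sketch: a small network maps a short window of
# range measurements to a steering action, supervised by an expert planner.
# Network size, window length and labels are illustrative placeholders.
import torch
import torch.nn as nn

window_len = 64                      # assumed number of range samples per input
policy = nn.Sequential(
    nn.Linear(window_len, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),                # e.g. (yaw rate, forward speed) command
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder dataset: range windows and the expert planner's actions.
ranges = torch.rand(1024, window_len)            # stand-in for LiDAR windows
expert_actions = torch.rand(1024, 2)             # stand-in for expert labels

for epoch in range(10):
    pred = policy(ranges)
    loss = loss_fn(pred, expert_actions)         # imitate the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```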
Unmanned Aircraft Systems (UAS) have drawn increasing attention recently, owing to advancements in related research, technology and applications. While UAS have been deployed successfully in military scenarios for decades, civil use cases have lately been tackled by the robotics research community. This chapter overviews the core elements of this highly interdisciplinary field; the reader is guided through the design process of aerial robots for various applications, starting with a qualitative characterization of different types of UAS. Design and modeling are closely related, forming a typically iterative process of drafting and analyzing the related properties. Therefore, we overview aerodynamics and dynamics, as well as their application to fixed-wing, rotary-wing, and flapping-wing UAS, including related analytical tools and practical guidelines. Respecting use-case-specific requirements and core autonomous robot demands, we finally provide guidelines on related system integration challenges.
This paper proposes a method for tight fusion of visual, depth and inertial data in order to extend robotic capabilities for navigation in GPS-denied, poorly illuminated, and textureless environments. Visual and depth information are fused at the feature detection and descriptor extraction levels to augment one sensing modality with the other. These multimodal features are then further integrated with inertial sensor cues using an extended Kalman filter to estimate the robot pose, sensor bias terms, and landmark positions simultaneously as part of the filter state. As demonstrated through a set of hand-held and Micro Aerial Vehicle experiments, the proposed algorithm is shown to perform reliably in challenging visually-degraded environments using RGB-D information from a lightweight and low-cost sensor and data from an IMU.
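The sketch below shows a generic extended Kalman filter predict/update cycle of the kind referenced here, with a toy constant-velocity example; the state layout, models, and noise values are placeholders rather than the paper's actual filter formulation.

```python
# Generic EKF skeleton: propagate with (possibly IMU-driven) dynamics, then
# correct with a measurement. Models and noise values are placeholders.
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the state with dynamics f and Jacobian F."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the state with measurement z, model h, Jacobian H."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: constant-velocity motion with a position measurement.
dt = 0.1
F = np.array([[1, dt], [0, 1]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, lambda s: F @ s, F, 0.01 * np.eye(2))
H = np.array([[1.0, 0.0]])
x, P = ekf_update(x, P, np.array([0.5]), lambda s: H @ s, H, np.array([[0.05]]))
print(x)
```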
In this paper the problem of autonomous navigation of aerial robots through obscurants is considered. As visible spectrum cameras and most LiDAR technologies provide degraded data in such conditions, the problem of localization is approached through the fusion of thermal camera data with inertial sensor cues. In particular, a long-wave infrared camera is employed and combined with an inertial measurement unit. The sensor intrinsic and extrinsic parameters are appropriately calibrated, and an Extended Kalman Filter framework that uses direct photometric feedback is employed in order to achieve robust odometry estimation. This framework is capable of accomplishing real-time localization while navigating through obscurants in GPS-denied conditions. Subsequently, an experimental study of autonomous aerial robotic navigation within a smoke-filled machine-shop environment was conducted. The presented results demonstrate the ability of the proposed solution to ensure reliable navigation in such extreme visually-degraded conditions.
A new algorithm, called rapidly exploring random tree of trees (RRTOT), is proposed that aims to address the challenge of planning for autonomous structural inspection. Given a representation of a structure, a visibility model of an onboard sensor, an initial robot configuration and constraints, RRTOT computes inspection paths that provide full coverage. Sampling-based techniques and a meta-tree structure consisting of multiple RRT* trees are employed to find admissible paths with decreasing cost. Using this approach, RRTOT does not suffer from the limitations of strategies that separate the inspection path planning problem into first finding the minimum set of observation points and only afterwards computing the best possible path among them. Analysis is provided on the capability of RRTOT to find admissible solutions that, in the limit case, approach the optimal one. The algorithm is evaluated in both simulation and experimental studies. An unmanned rotorcraft equipped with a vision sensor was utilized as the experimental platform, and validation of the achieved inspection properties was performed using 3D reconstruction techniques.
Autonomous navigation of microaerial vehicles in environments that are simultaneously GPS‐denied and visually degraded, and especially in the dark, texture‐less and dust‐ or smoke‐filled settings, is rendered particularly hard. However, a potential solution arises if such aerial robots are equipped with long wave infrared thermal vision systems that are unaffected by darkness and can penetrate many types of obscurants. In response to this fact, this study proposes a keyframe‐based thermal–inertial odometry estimation framework tailored to the exact data and concepts of operation of thermal cameras. The front‐end component of the proposed solution utilizes full radiometric data to establish reliable correspondences between thermal images, as opposed to operating on rescaled data as previous efforts have presented. In parallel, taking advantage of a keyframe‐based optimization back‐end, the proposed method is suitable for handling periods of data interruption which are commonly present in thermal cameras, while it also ensures the joint optimization of reprojection errors of 3D landmarks and inertial measurement errors. The developed framework was verified with respect to its resilience, performance, and ability to enable autonomous navigation in an extensive set of experimental studies including multiple field deployments in severely degraded, dark, and obscurants‐filled underground mines.
In this chapter, strategies for Model Predictive Control (MPC) design and implementation for Unmanned Aerial Vehicles (UAVs) are discussed. This chapter is divided into two main sections. In the first section, modelling, controller design and implementation of MPC for multi-rotor systems are presented. In the second section, we show modelling and controller design techniques for fixed-wing UAVs. System identification techniques are used to derive an estimate of the system model, while state-of-the-art solvers are employed to solve the optimization problem online. By the end of this chapter, the reader should be able to implement an MPC to achieve trajectory tracking for both multi-rotor systems and fixed-wing UAVs.
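As a minimal illustration of the receding-horizon structure described in this chapter, the sketch below sets up a linear MPC problem with a quadratic cost over a toy double-integrator model using cvxpy; the model, horizon, weights, and limits are illustrative assumptions, not an identified UAV model.

```python
# Minimal linear MPC sketch (receding-horizon quadratic program).
# Double-integrator model, horizon and weights are illustrative only.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                                    # step and horizon length
A = np.array([[1, dt], [0, 1]])                    # toy 1-axis double integrator
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
x0, x_ref = np.array([0.0, 0.0]), np.array([1.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 2.0]        # actuator limit
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control input:", u.value[:, 0])       # apply, then re-solve
```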
Within this paper, a new fast algorithm that provides efficient solutions to the problem of inspection path planning for complex 3D structures is presented. The algorithm assumes a triangular mesh representation of the structure and employs an alternating two-step optimization paradigm to find good viewpoints that together provide full coverage and a connecting path that has low cost. In every iteration, the viewpoints are chosen such that the connection cost is reduced and, subsequently, the tour is optimized. Vehicle and sensor limitations are respected within both steps. Sample implementations are provided for rotorcraft and fixed-wing unmanned aerial systems. The resulting algorithm characteristics are evaluated using simulation studies as well as multiple real-world experimental test cases with both vehicle types.
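The following heavily simplified sketch conveys the alternating two-step idea, optimizing viewpoints with the tour fixed and then re-optimizing the tour with the viewpoints fixed; visibility, coverage, and vehicle constraints are abstracted away, so it is a structural illustration rather than the actual algorithm.

```python
# Simplified alternation: refine viewpoints with the tour fixed, then
# recompute the tour. Constraints are abstracted away in this sketch.
import numpy as np

rng = np.random.default_rng(0)
viewpoints = rng.random((12, 2))                  # stand-in admissible viewpoints

def tour_nearest_neighbor(pts):
    """Cheap TSP heuristic: greedily visit the nearest unvisited viewpoint."""
    order, left = [0], set(range(1, len(pts)))
    while left:
        last = order[-1]
        nxt = min(left, key=lambda j: np.linalg.norm(pts[j] - pts[last]))
        order.append(nxt)
        left.remove(nxt)
    return order

def refine_viewpoints(pts, order, step=0.2):
    """Pull each viewpoint toward its tour neighbors to shorten connections
    (stands in for re-sampling within the admissible, full-coverage set)."""
    new = pts.copy()
    for i, idx in enumerate(order):
        prev_p, next_p = pts[order[i - 1]], pts[order[(i + 1) % len(order)]]
        new[idx] = (1 - step) * pts[idx] + step * 0.5 * (prev_p + next_p)
    return new

for _ in range(5):                                # alternate the two steps
    order = tour_nearest_neighbor(viewpoints)
    viewpoints = refine_viewpoints(viewpoints, order)
print("tour order:", order)
```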
In this research we present a novel algorithm for background subtraction using a moving camera. Our algorithm is based purely on visual information obtained from a camera mounted on an electric bus operating in downtown Reno, and automatically detects moving objects of interest with the aim of supporting a fully autonomous vehicle. In our approach we exploit the optical flow vectors generated by the motion of the camera while keeping parameter assumptions to a minimum. We first estimate the Focus of Expansion, which is used to model and simulate 3D points given the intrinsic parameters of the camera, then perform multiple linear regression to estimate the regression equation parameters and apply the fitted model to the real data of every frame to identify moving objects. We validated our algorithm using data taken from a common bus route.
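As a generic illustration of Focus of Expansion estimation from optical flow, the sketch below recovers the FoE by linear least squares from a synthetic expanding flow field; the noise level, image size, and flow model are made up, and the paper's regression formulation differs in its details.

```python
# Sketch of focus-of-expansion (FoE) estimation from optical flow by linear
# least squares: under pure forward translation each flow vector points away
# from the FoE, so the parallelism condition gives one linear equation per pixel.
import numpy as np

rng = np.random.default_rng(1)
foe_true = np.array([320.0, 240.0])                # hypothetical FoE [px]
pts = rng.uniform(0, 640, size=(200, 2))           # sampled pixel locations
flow = 0.01 * (pts - foe_true)                     # ideal expanding flow field
flow += rng.normal(scale=0.05, size=flow.shape)    # measurement noise

# Each pixel gives: v * fx - u * fy = v * x - u * y
u, v = flow[:, 0], flow[:, 1]
A = np.column_stack([v, -u])
b = v * pts[:, 0] - u * pts[:, 1]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated FoE:", foe_est)
```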
2022 International Conference on Robotics and Automation (ICRA), May 23, 2022
This paper contributes a method to design a novel navigation planner exploiting a learning-based collision prediction network. The neural network is tasked to predict the collision cost of each action sequence in a predefined motion primitives library in the robot's velocity-steering angle space, given only the current depth image and the estimated linear and angular velocities of the robot. Furthermore, we account for the uncertainty of the robot's partial state by utilizing the Unscented Transform and the uncertainty of the neural network model by using Monte Carlo dropout. The uncertainty-aware collision cost is then combined with the goal direction given by a global planner in order to determine the best action sequence to execute in a receding horizon manner. To demonstrate the method, we develop a resilient small flying robot integrating lightweight sensing and computing resources. A set of simulation and experimental studies, including a field deployment, in both cluttered and perceptually-challenging environments is conducted to evaluate the quality of the prediction network and the performance of the proposed planner.
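The sketch below illustrates Monte Carlo dropout for an uncertainty-aware collision cost: several stochastic forward passes with dropout kept active yield a mean cost plus a spread-based risk term; the network architecture, input features, and weighting are placeholders rather than the paper's collision prediction network.

```python
# Monte Carlo dropout sketch: keep dropout active, average several stochastic
# passes, and penalize the spread as model uncertainty. Placeholders only.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1), nn.Sigmoid(),            # collision probability per primitive
)

def mc_dropout_cost(model, features, passes=20, risk_weight=1.0):
    """Mean collision cost plus a risk term from the MC-dropout spread."""
    model.train()                               # keep dropout active at inference
    with torch.no_grad():
        samples = torch.stack([model(features) for _ in range(passes)])
    mean, std = samples.mean(dim=0), samples.std(dim=0)
    return mean + risk_weight * std             # uncertainty-aware cost

features = torch.rand(8, 32)                    # stand-in per-primitive features
costs = mc_dropout_cost(net, features)
best_primitive = torch.argmin(costs.squeeze())  # pick the safest action sequence
print("selected primitive:", int(best_primitive))
```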
This paper addresses the problem of force and position control for an unmanned coaxial rotorcraft physically interacting with its environment through contact. The proposed control strategy equips the unmanned aerial robot with the capability to safely establish contact with the surfaces of its environment and apply desired forces on them while performing sliding maneuvers. A hybrid force/position control scheme is implemented, with the force controller being activated once contact is detected. Contact information is derived from force measurements and a hysteresis-based contact detection strategy. Extended experimental studies are conducted to evaluate the efficiency of the proposed methods.
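A minimal sketch of hysteresis-based contact detection follows: contact is declared above an engage threshold and released only below a lower one, avoiding chattering around a single threshold; the threshold values are illustrative assumptions, not the paper's tuning.

```python
# Hysteresis-based contact detection from a force measurement (sketch).
# Threshold values are illustrative only.
class ContactDetector:
    def __init__(self, f_on=2.0, f_off=0.5):
        self.f_on, self.f_off = f_on, f_off     # engage / release thresholds [N]
        self.in_contact = False

    def update(self, normal_force):
        if not self.in_contact and normal_force > self.f_on:
            self.in_contact = True              # switch to force control
        elif self.in_contact and normal_force < self.f_off:
            self.in_contact = False             # fall back to position control
        return self.in_contact

detector = ContactDetector()
for f in [0.1, 0.8, 2.4, 1.2, 0.6, 0.3]:        # sample force trace [N]
    print(f, detector.update(f))
```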
In this research, the problem of background subtraction is addressed using a single static camera. Aside from the practicality of distinguishing foreground moving objects from background scenes, background subtraction is an essential step towards classifying and tracking objects in complex and dynamic environments. Our proposed method is based on the temporal averaging of individual pixels over a small training sample and the modeling of pixel intensities with a log-normal probability density function that best fits the divergence among background pixels. Our method has been tested in a series of different and challenging environments with illumination changes as well as high-speed foreground objects, with a view to use in autonomous vehicle applications for pedestrian and car detection. The results from this research are juxtaposed against state-of-the-art methods and demonstrate the efficiency of our approach.
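The sketch below gives a minimal per-pixel log-normal background model in the spirit described here: log-intensity statistics are fitted over a short training window and pixels deviating by more than a few standard deviations are flagged as foreground; the window length and threshold are assumptions, not the paper's settings.

```python
# Per-pixel log-normal background model (sketch): fit log-intensity statistics
# over a training window, then flag large deviations as foreground.
import numpy as np

def fit_background(training_frames, eps=1e-3):
    """training_frames: array (N, H, W) of grayscale intensities in [0, 255]."""
    logs = np.log(training_frames.astype(np.float64) + eps)
    return logs.mean(axis=0), logs.std(axis=0) + eps    # per-pixel mu, sigma

def foreground_mask(frame, mu, sigma, k=2.5, eps=1e-3):
    z = np.abs(np.log(frame.astype(np.float64) + eps) - mu) / sigma
    return z > k                                         # True where foreground

# Toy usage with random data standing in for a camera stream.
rng = np.random.default_rng(2)
train = rng.integers(100, 120, size=(30, 48, 64))        # static-ish background
mu, sigma = fit_background(train)
test = train[0].copy()
test[10:20, 20:30] = 250                                 # synthetic moving object
mask = foreground_mask(test, mu, sigma)
print("foreground pixels:", int(mask.sum()))
```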
This paper presents a methodology to achieve Robotic Aerial Tracking of a mobile human subject within a previously-unmapped environment, potentially cluttered with unknown structures. The proposed system employs a high-end Unmanned Aerial Vehicle capable of fully-autonomous estimation and flight control. This platform also carries a high-level Perception and Navigation Unit, which performs the tasks of 3D-visual perception, subject detection, segmentation, and tracking, allowing the aerial system to follow the human subject as they perform free unscripted motion, in the perceptual and, equally importantly, the mobile sense. To this purpose, a navigation synthesis which relies on an attractive/repulsive forces-based approach and collision-free path planning algorithms is integrated into the scheme. Employing an incrementally-built map model which accounts for the ground subject's and the aerial vehicle's motion constraints, the Robotic Aerial Tracker system is capable of achieving continuous tracking and reacquisition of the mobile target.
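As a generic illustration of the attractive/repulsive forces idea, the sketch below computes a motion increment from an attraction toward the tracked subject and repulsion from nearby obstacles; the gains, influence radius, and step size are illustrative assumptions rather than the system's actual navigation synthesis.

```python
# Potential-field step (sketch): attraction to the subject, repulsion from
# nearby obstacles. Gains and influence radius are illustrative only.
import numpy as np

def potential_field_step(pos, target, obstacles, k_att=1.0, k_rep=0.5,
                         influence=2.0, step=0.1):
    force = k_att * (target - pos)                       # attraction to subject
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                         # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return pos + step * force                            # commanded motion increment

pos = np.array([0.0, 0.0])
target = np.array([5.0, 3.0])
obstacles = [np.array([2.0, 1.5])]
for _ in range(50):
    pos = potential_field_step(pos, target, obstacles)
print("final position:", pos)
```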
2022 International Conference on Unmanned Aircraft Systems (ICUAS), Jun 21, 2022
This work presents the design, hardware realization, autonomous exploration and object detection capabilities of RMF-Owl, a new collision-tolerant aerial robot tailored for resilient autonomous subterranean exploration. The system is custom built for underground exploration with a focus on collision tolerance, resilient autonomy with robust localization and mapping, and high-performance exploration path planning in confined, obstacle-filled and topologically complex underground environments. Moreover, RMF-Owl offers the ability to search, detect and locate objects of interest, which can be particularly useful in search and rescue missions. A series of results from field experiments are presented in order to demonstrate the system's ability to autonomously explore challenging unknown underground environments.