Papers by Christian Laugier
HAL (Le Centre pour la Communication Scientifique Directe), Oct 1, 2009
2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), Dec 11, 2022
CiteSeer X (The Pennsylvania State University), 2003
arXiv (Cornell University), Jul 27, 2021
2023 IEEE International Conference on Robotics and Automation (ICRA)
HAL (Le Centre pour la Communication Scientifique Directe), Sep 27, 2021
2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV)
Semantic grids are a useful representation of the environment around a robot. They can be used in autonomous vehicles to concisely represent the scene around the car, capturing vital information for downstream tasks like navigation or collision assessment. Information from different sensors can be used to generate these grids. Some methods rely only on RGB images, whereas others choose to incorporate information from other sensors, such as radar or LiDAR. In this paper, we present an architecture that fuses LiDAR and camera information to generate semantic grids. By using the 3D information from a LiDAR point cloud, the LiDAR-Aided Perspective Transform Network (LAPTNet) is able to associate features in the camera plane with the bird's eye view without having to predict any depth information about the scene. Compared to state-of-the-art camera-only methods, LAPTNet achieves an improvement of up to 8.8 points (38.13%) for the classes proposed in the nuScenes dataset validation split.
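A minimal sketch of the core idea described above, written independently of the paper: LiDAR points are projected into the camera image so that image features can be copied into a bird's eye view grid at the points' known 3D positions, with no depth prediction. All names, parameters and the grid layout here are illustrative assumptions, not the authors' implementation.

import numpy as np

def lidar_aided_bev_features(points_lidar, feat_map, cam_intrinsic, lidar_to_cam,
                             grid_range=(-50.0, 50.0), cell_size=0.5):
    # points_lidar : (N, 3) LiDAR points in the sensor/ego frame
    # feat_map     : (C, H, W) camera feature map (assumed aligned with the image)
    # cam_intrinsic: (3, 3) camera intrinsic matrix
    # lidar_to_cam : (4, 4) homogeneous transform from LiDAR to camera frame
    C, H, W = feat_map.shape
    size = int((grid_range[1] - grid_range[0]) / cell_size)
    bev = np.zeros((C, size, size), dtype=feat_map.dtype)

    # Transform points to the camera frame and keep those in front of the camera.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (lidar_to_cam @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 1e-3
    pts_cam, pts_bev = pts_cam[in_front], points_lidar[in_front]

    # Project into the image plane; depth comes from LiDAR, not from a network.
    uv = (cam_intrinsic @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Scatter the sampled image features to BEV cells given each point's (x, y).
    gx = ((pts_bev[:, 0] - grid_range[0]) / cell_size).astype(int)
    gy = ((pts_bev[:, 1] - grid_range[0]) / cell_size).astype(int)
    keep = valid & (gx >= 0) & (gx < size) & (gy >= 0) & (gy < size)
    bev[:, gy[keep], gx[keep]] = feat_map[:, v[keep], u[keep]]
    return bev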
2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV)
Semantic grids are a succinct and convenient approach to represent the environment for mobile robotics and autonomous driving applications. While the use of LiDAR sensors is now generalized in robotics, most semantic grid prediction approaches in the literature focus only on RGB data. In this paper, we present an approach for semantic grid prediction that uses a transformer architecture to fuse LiDAR sensor data with RGB images from multiple cameras. Our proposed method, TransFuseGrid, first transforms both input streams into top-view embeddings, and then fuses these embeddings at multiple scales with Transformers. Finally, a decoder transforms the fused, top-view feature map into a semantic grid of the vehicle's environment. We evaluate the performance of our approach on the nuScenes dataset for the vehicle, drivable area, lane divider and walkway segmentation tasks. The results show that TransFuseGrid achieves superior performance compared to competing RGB-only and LiDAR-only methods. Additionally, the Transformer feature fusion leads to a significant improvement over naive RGB-LiDAR concatenation. In particular, for the segmentation of vehicles, our model outperforms state-of-the-art RGB-only and LiDAR-only methods by 24% and 53%, respectively.
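A minimal sketch of transformer-based fusion of two top-view embeddings at a single scale, assuming PyTorch and standard multi-head attention; the module name, channel sizes and residual layout are illustrative assumptions, not the TransFuseGrid architecture itself.

import torch
import torch.nn as nn

class TopViewFusion(nn.Module):
    # Fuse camera and LiDAR top-view embeddings of the same spatial size
    # with cross-attention: camera tokens query LiDAR tokens.
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # Both inputs: (B, C, H, W) top-view feature maps at one scale.
        B, C, H, W = cam_bev.shape
        q = cam_bev.flatten(2).transpose(1, 2)      # (B, H*W, C) camera tokens
        kv = lidar_bev.flatten(2).transpose(1, 2)   # (B, H*W, C) LiDAR tokens
        fused, _ = self.attn(q, kv, kv)             # cross-attention between streams
        fused = self.norm(fused + q)                # residual connection
        return fused.transpose(1, 2).reshape(B, C, H, W)

# Example: fuse 64-channel, 50x50 top-view embeddings from each modality.
fusion = TopViewFusion(channels=64)
out = fusion(torch.randn(2, 64, 50, 50), torch.randn(2, 64, 50, 50))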
2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
Forecasting the motion of surrounding traffic is one of the key challenges in the quest to achieve safe autonomous driving technology. Current state-of-the-art deep forecasting architectures are capable of producing impressive results. However, in many cases they also output completely unreasonable trajectories, making them unsuitable for deployment. In this work, we present a deep forecasting architecture that leverages the map lane centerlines available in recent datasets to predict sensible trajectories; that is, trajectories that conform to the road layout, agree with the observed dynamics of the target, and react to the presence of surrounding agents. To model such sensible behavior, the proposed architecture first predicts the lane or lanes that the target agent is likely to follow. Then, a navigational goal along each candidate lane is predicted, allowing the regression of the final trajectory in a lane- and goal-oriented manner. Our experiments on the Argoverse dataset show that our architecture achieves performance on par with lane-oriented state-of-the-art forecasting approaches and not far behind goal-oriented approaches, while consistently producing sensible trajectories.
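A minimal sketch of the lane-and-goal-oriented pipeline described above, replacing the learned components with simple geometric stand-ins: score candidate lanes, pick a goal along the best lane, then produce a trajectory toward that goal. Every function and heuristic here is an illustrative assumption, not the paper's model.

import numpy as np

def forecast_along_lanes(history, candidate_lanes, horizon=30):
    # history         : (T, 2) past positions of the target agent
    # candidate_lanes : list of (L_i, 2) lane centerline polylines near the agent
    # horizon         : number of future steps to predict
    last = history[-1]
    speed = np.linalg.norm(history[-1] - history[-2])  # per-step displacement

    # 1) Lane selection: here simply the centerline closest to the agent.
    best_lane = min(candidate_lanes,
                    key=lambda lane: np.min(np.linalg.norm(lane - last, axis=1)))

    # 2) Goal prediction: the centerline point reachable at the current speed.
    start = int(np.argmin(np.linalg.norm(best_lane - last, axis=1)))
    seg = np.linalg.norm(np.diff(best_lane[start:], axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seg)])
    goal_idx = min(start + int(np.searchsorted(arclen, speed * horizon)),
                   len(best_lane) - 1)

    # 3) Trajectory "regression", here waypoints interpolated along the lane.
    path = best_lane[start:goal_idx + 1]
    t = np.linspace(0.0, 1.0, horizon)
    s = np.linspace(0.0, 1.0, len(path))
    return np.stack([np.interp(t, s, path[:, k]) for k in range(2)], axis=1)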
2022 IEEE Intelligent Vehicles Symposium (IV)
Reliably predicting future occupancy of highly dynamic urban environments is an important precursor for safe autonomous navigation. Common challenges in the prediction include forecasting the relative position of other vehicles, modelling the dynamics of vehicles subjected to different traffic conditions, and handling vanishing surrounding objects. To tackle these challenges, we propose a spatio-temporal prediction network pipeline that takes past information from the environment and semantic labels separately to generate future occupancy predictions. Compared to the current state of the art, our approach predicts occupancy over a longer horizon of 3 seconds and in a relatively complex environment from the nuScenes dataset. Our experimental results demonstrate the ability of spatio-temporal networks to understand scene dynamics without the need for HD maps or explicit modelling of dynamic objects. We publicly release our occupancy grid dataset based on nuScenes to support further research.
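A minimal sketch of the input/output structure described above, assuming PyTorch: a stack of past occupancy grids and past semantic label grids go in as separate streams, and a grid per future time step comes out. The layer choices, step counts and class count are illustrative assumptions, not the authors' network.

import torch
import torch.nn as nn

class OccupancyForecaster(nn.Module):
    # Consume a short history of occupancy and semantic grids,
    # emit one predicted occupancy grid per future step.
    def __init__(self, past_steps=5, future_steps=6, sem_classes=4, hidden=32):
        super().__init__()
        in_ch = past_steps + past_steps * sem_classes
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, future_steps, 3, padding=1),
        )

    def forward(self, occ_hist, sem_hist):
        # occ_hist: (B, past, H, W) occupancy probabilities
        # sem_hist: (B, past, classes, H, W) one-hot semantic labels
        B, T, K, H, W = sem_hist.shape
        x = torch.cat([occ_hist, sem_hist.reshape(B, T * K, H, W)], dim=1)
        return torch.sigmoid(self.net(x))   # (B, future, H, W) predicted occupancy

# Example: a 3 s horizon at 2 Hz -> 6 future grids from 5 past grids.
model = OccupancyForecaster()
pred = model(torch.rand(1, 5, 128, 128), torch.rand(1, 5, 4, 128, 128))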
2022 IEEE Intelligent Vehicles Symposium (IV)
Testing and validating advanced automotive software is of paramount importance to guarantee safety and quality. While real-world testing is highly demanding and simulation testing is not reliable, we propose a new augmented reality framework that takes advantage of both environments. This new testing methodology is intended to be a bridge between Vehicle-in-the-Loop and real-world testing. It makes it possible to easily and safely place the whole vehicle and all its software, from perception to control, in realistic test conditions. This framework provides a flexible way to introduce any virtual element into the outputs of the sensors of the vehicle under test. For each sensing modality, the framework requires a real-time augmentation function that preserves real sensor data and enhances it with virtual data. The LiDAR data augmentation function is presented together with its implementation details. Relying on both qualitative and quantitative analysis of experimental results, the representativeness of test scenes generated by the augmented reality framework is finally demonstrated.
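A minimal sketch of what a LiDAR augmentation function of this kind could look like, under the assumption that virtual objects are rendered as point sets in the sensor frame and must occlude the real background: for each beam direction, only the closest return is kept. The binning scheme and all parameters are illustrative assumptions, not the paper's implementation.

import numpy as np

def augment_lidar_scan(real_points, virtual_points, az_bins=2048, el_bins=64):
    # real_points, virtual_points : (N, 3) Cartesian points in the sensor frame.
    def to_bins(pts):
        r = np.linalg.norm(pts, axis=1)
        az = np.arctan2(pts[:, 1], pts[:, 0])                       # azimuth in [-pi, pi]
        el = np.arcsin(np.clip(pts[:, 2] / np.maximum(r, 1e-6), -1, 1))
        ai = ((az + np.pi) / (2 * np.pi) * (az_bins - 1)).astype(int)
        ei = ((el + np.pi / 2) / np.pi * (el_bins - 1)).astype(int)
        return ai * el_bins + ei, r

    nearest = {}
    for pts in (real_points, virtual_points):
        bins, ranges = to_bins(pts)
        for b, rng, p in zip(bins, ranges, pts):
            # Keep only the nearest return per (azimuth, elevation) cell,
            # so virtual objects correctly mask the real points behind them.
            if b not in nearest or rng < nearest[b][0]:
                nearest[b] = (rng, p)
    return np.array([p for _, p in nearest.values()])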
2022 International Conference on Robotics and Automation (ICRA)