Vineet Suryan
October 11, 2022
Self-driving cars have the potential to change the paradigm of transportation. According to the U.S. Department of Transportation's National Motor Vehicle Crash Causation Survey, 93% of all vehicle accidents involve human error. Eliminating those accidents would be a giant leap toward a safer means of transportation.
However, developing autonomous driving systems requires a tremendous amount of training images, usually collected and labelled by human labor, which is costly and error-prone. To make things worse, gathering such a vast amount of real driving images is difficult because unusual corner cases and peculiar weather and lighting conditions cannot be staged on demand.
In recent years, datasets synthesized with 3D game engines have gained wide acceptance as a viable way to tackle this problem. Despite these advances, monitoring and validating the data generation process often remains time-consuming and challenging.
Motivated by these observations, we implemented Carlafox, an open-source web-based CARLA visualizer that takes one step towards democratizing the daunting task of dataset generation, making image synthesis and automatic ground-truth generation more maintainable, cheaper, and more repeatable.
CARLA is an open-source 3D simulator for autonomous driving. It provides methods for spawning various pre-defined vehicle models into the map, which can be controlled via a built-in lane-following autopilot or by custom algorithms. CARLA comes with various maps that simulate environments ranging from urban centers to countryside roads, including environmental presets for time of day and weather conditions.
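To give a flavor of the API, here is a minimal sketch, assuming a CARLA server listening on the default port 2000, that spawns a vehicle, hands it to the built-in autopilot, and switches the weather preset:

```python
import carla

# Connect to a running CARLA server (assumed on the default port 2000).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a pre-defined vehicle at the first recommended spawn point.
bp_lib = world.get_blueprint_library()
vehicle_bp = bp_lib.find('vehicle.tesla.model3')
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Hand the vehicle to the built-in lane-following autopilot.
vehicle.set_autopilot(True)

# Switch the environment to an overcast, rainy late afternoon.
world.set_weather(carla.WeatherParameters(cloudiness=80.0,
                                          precipitation=30.0,
                                          sun_altitude_angle=20.0))
```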
Though CARLA is a 3D simulator, it has no built-in visualizer for anything beyond simply viewing the scene. The Python example scripts included with CARLA use PyGame to display graphical user interfaces and do basic sensor data visualization; however, they cannot visualize 3D LiDAR data or combinations of sensors, such as bounding boxes projected onto camera images.
Carlaviz is a third-party web-based CARLA visualizer that can combine multiple data streams in a single visualization. However, its layout and options for customizing the data are limited.
With Carlafox, we take it a step further by providing a streamlined solution to visualize both recorded and live CARLA simulations.
To visualize the CARLA simulation, we first have to understand the CARLA actors and sensor capabilities.
The CARLA simulator makes it easy to configure and place on-board sensors such as RGB cameras, depth cameras, radar, IMU, LiDAR, and semantic LiDAR, as well as to adjust weather conditions and set up the traffic scene to reproduce specific traffic cases.
The CARLA API supports custom sensor configurations as well. For example, it makes it possible to replicate a specific LiDAR configuration used on a real car.
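For instance, the following sketch configures CARLA's ray-cast LiDAR with values roughly matching a 64-channel spinning unit (the exact numbers here are illustrative, not a specific product's datasheet) and mounts it on an existing vehicle:

```python
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn an ego vehicle to carry the sensor.
vehicle = world.spawn_actor(bp_lib.find('vehicle.tesla.model3'),
                            world.get_map().get_spawn_points()[0])

# Configure the ray-cast LiDAR blueprint.
lidar_bp = bp_lib.find('sensor.lidar.ray_cast')
lidar_bp.set_attribute('channels', '64')
lidar_bp.set_attribute('range', '120.0')
lidar_bp.set_attribute('rotation_frequency', '10')
lidar_bp.set_attribute('points_per_second', '1300000')
lidar_bp.set_attribute('upper_fov', '2.0')
lidar_bp.set_attribute('lower_fov', '-24.8')

# Mount the sensor on the roof and start receiving point clouds.
lidar = world.spawn_actor(lidar_bp,
                          carla.Transform(carla.Location(x=0.0, z=1.8)),
                          attach_to=vehicle)
lidar.listen(lambda pointcloud: print(f'received {len(pointcloud)} points'))
```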
CARLA provides a simple API to populate the scene with what it calls actors; these include not only vehicles and walkers but also sensors, traffic signs, and traffic lights. In addition, instead of populating the world manually, CARLA comes with a traffic simulation, which comes in handy to automatically create a rich environment for training and testing various autonomous driving stacks.
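The traffic simulation is driven by CARLA's Traffic Manager. A minimal sketch for populating the map with autopiloted background traffic might look like this (the vehicle count and spacing are arbitrary choices for illustration):

```python
import random
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# The Traffic Manager steers every vehicle registered to it.
traffic_manager = client.get_trafficmanager(8000)
traffic_manager.set_global_distance_to_leading_vehicle(2.5)

blueprints = world.get_blueprint_library().filter('vehicle.*')
spawn_points = world.get_map().get_spawn_points()
random.shuffle(spawn_points)

# Spawn up to 30 background vehicles and hand them to the Traffic Manager.
for transform in spawn_points[:30]:
    npc = world.try_spawn_actor(random.choice(blueprints), transform)
    if npc is not None:
        npc.set_autopilot(True, traffic_manager.get_port())
```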
To visualize the CARLA data in Foxglove, we need to convert it to a format that Foxglove understands. Out of the box, Foxglove supports data from a running ROS1/ROS2 connection (i.e., a live simulation) or from a recorded ROS .bag file. To that end, we adapted and optimized the ROS-bridge project, which acts as a translation layer between CARLA and Foxglove, converting each CARLA sensor into a ROS message type that Foxglove understands (a sketch of this translation follows the list below):
sensor_msgs/CompressedImage
sensor_msgs/PointCloud2
nav_msgs/OccupancyGrid
geometry_msgs/PoseStamped
sensor_msgs/NavSatFix
tf2_msgs/TFMessage
foxglove_msgs/ImageMarkerArray
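As an illustration of the kind of translation the bridge performs, here is a minimal, self-contained sketch, assuming ROS1, that publishes a CARLA camera feed as sensor_msgs/CompressedImage; the topic and frame names are illustrative, not necessarily those used by the actual bridge:

```python
import carla
import cv2
import numpy as np
import rospy
from sensor_msgs.msg import CompressedImage

# Connect to CARLA and attach an RGB camera to a freshly spawned vehicle.
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()
vehicle = world.spawn_actor(bp_lib.find('vehicle.tesla.model3'),
                            world.get_map().get_spawn_points()[0])
camera = world.spawn_actor(bp_lib.find('sensor.camera.rgb'),
                           carla.Transform(carla.Location(x=1.5, z=2.0)),
                           attach_to=vehicle)

rospy.init_node('carla_camera_bridge')
pub = rospy.Publisher('/carla/ego_vehicle/rgb_front/image/compressed',
                      CompressedImage, queue_size=10)

def on_image(image):
    # CARLA delivers 32-bit BGRA frames; drop alpha and JPEG-encode for transport.
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgr = bgra.reshape((image.height, image.width, 4))[:, :, :3]
    msg = CompressedImage()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'ego_vehicle/rgb_front'
    msg.format = 'jpeg'
    msg.data = cv2.imencode('.jpg', bgr)[1].tobytes()
    pub.publish(msg)

camera.listen(on_image)
rospy.spin()
```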
Next, we created a Foxglove layout that visualizes all this data. The layout features three main use cases: perception, planning, and diagnostics.
Drag and drop your own ROS bag files into Foxglove to get immediate visual insight into your CARLA data, or connect to a live simulation; we provide a live demo environment so you can test the setup quickly.
To see Carlafox in action, give our demo a try. You can also visit our GitHub repo to obtain the source code, which is readily available.
Our work would not have been possible without the help of countless open-source resources. We hope our contributions to Carlafox, Foxglove, and CARLA will help others in the automotive community build the next generation of innovative technology.
If you have questions or ideas on how to visualize your data, join us on our Gitter #lounge channel or leave a comment below.