Software ‐ ROS2 and Docker
Our ROS2 nodes run in NVIDIA's Isaac ROS Dev Docker image.
The container mounts a ROS2 workspace directory from the Jetson, `/data/workspaces/isaac-ros-dev`. Currently all our development happens in this directory and is stored only locally on the Jetson, not in git. A working snapshot of the directory from 21.06.2024 is in Google Drive here, in case it is needed.
The alias `l-docker-jetson` runs the `/data/workspaces/isaac-ros-dev/src/isaac_ros_common/scripts/run_recording.sh` script. This script is a modified version of the `run_dev.sh` script in the same directory that ships with the NVIDIA dev environment. Both scripts either run or attach to a running instance of the pre-built NVIDIA Isaac ROS Docker image.
To make it easy to run common tasks in the foreground, the `run_recording.sh` script takes command arguments to execute in the container. `l-docker-jetson -h`:
```
Usage: run_recording.sh [OPTIONS]
Options:
  -c, --command   Specify the command to pass to Docker run. Available commands:
                  start_recording <run_id> [topic names] - Start ROS2 bag recording with specified run ID and topics.
                  hdr_start - Start the HDR camera using predefined settings.
                  ros1_bridge_start - Start the ROS1 bridge for interfacing with ROS2.
                  Any other command will attempt to execute directly in the container.
Examples:
  ./run_recording.sh -c "hdr_start"
  ./run_recording.sh -c "start_recording 12345 topic1 topic2 topic3"

If no command is provided, the script will open an interactive login shell.
```
You can add more commands in `workspaces/isaac-ros-dev/src/isaac_ros_common/docker/scripts/entrypoint_commands.sh`.
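The dispatch logic in `entrypoint_commands.sh` isn't reproduced here; as a hedged sketch, the idea is a `case` statement over the first argument. The command names match the usage above, but the bodies below are placeholders, not the real implementations:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the command dispatch in entrypoint_commands.sh.
# Only the command names are from the docs; the bodies are illustrative.
run_command() {
  case "$1" in
    start_recording)
      shift
      # Illustrative only: prints the recording command instead of running it.
      echo "ros2 bag record -o $1 ${*:2}"
      ;;
    hdr_start)
      echo "launching HDR cameras"
      ;;
    ros1_bridge_start)
      echo "launching ros1_bridge"
      ;;
    *)
      # Fall through: execute the argument directly in the container.
      "$@"
      ;;
  esac
}

run_command start_recording 12345 topic1 topic2
```

New commands slot in as additional `case` branches, matching the note above about extending the script.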
The `l-jetson` startup command opens 3 tmux panes: one for ROS1 in the foreground, one as an interactive shell on the Jetson, and one that runs `l-docker-jetson -c hdr_start`. The HDR cameras are therefore started whenever `l-jetson` is run. See below for details on which ROS2 nodes/processes this encompasses.
The normal `rosbag_record_coordinator` node can be used to record the ROS2 topics as well. Currently it is hardcoded to do so only for bags called "hdr", but this is easily extended. The ROS2 topics can be listed under `hdr` in the config yaml files, the same as ROS1 topics.
The recording is done with MCAP files instead of rosbags, but they are stored in the same output directory as the rest of the ROS1 data, in an `hdr` subdirectory: `data/<run_id>/hdr/xyz.mcap`
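The exact schema of the recording config isn't reproduced here; as a hypothetical sketch of the idea, the ROS2 topics would sit under an `hdr` key alongside the ROS1 topic lists:

```yaml
# Hypothetical fragment — the key name "hdr" is from the docs, but the
# camera names and overall schema are assumptions, not copied from the
# actual config files.
hdr:
  - /gt_box/hdr_front/image_compressed
  - /gt_box/hdr_left/image_compressed
  - /gt_box/hdr_right/image_compressed
```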
To see the available topics, attach to the container with `l-docker-jetson` and run `ros2 topic list`.
TODO: Clean up the `rosbag_record_node` Python script so the different formats aren't hardcoded into the script (this also covers Zed2i `.svo` files).
To see ROS2 topics in ROS1, you need to run a `ros1_bridge` node in Docker. This node requires a different environment than the recording, so it must be run in a separate shell.
The `l-docker-visualization` alias calls an alternate `jetson_visualization` launch script that opens 4 tmux panes, the 4th being `l-docker-jetson -c ros1_bridge_start`.
The ROS2 topics will then show up in `rostopic list` on the Jetson.
Note: building `ros1_bridge` takes about an hour, so make sure to exclude it with
`colcon build --symlink-install --packages-skip ros1_bridge` if you're rebuilding the other nodes.
TODO: These topics only show up on the Jetson, not on the OPC. You can
`ssh -X jetson` and run `rqt_image_view` to see the video streams, but we need to figure out how to forward the topics to the OPC so they can show up in RViz.
To record GPU-compressed imagery from the HDR cameras, several processes need to run:
1. 3 `v4l2_camera` nodes, one for each camera.
2. 3 `isaac_ros_image_proc` `ResizeNode` nodes to downsize the imagery from 1920x1280 to 1632x1088, the max resolution allowed by the NVIDIA compression.*
3. 3 `isaac_ros_image_proc` `ImageFormatConverterNode` nodes to adjust the image format to `rgb8`, which is the only format allowed by the NVIDIA compression.
4. 3 `isaac_ros_h264_encoder` nodes to encode the image data to h264 format.
*The NVIDIA docs say the max resolution is 1920x1200, but that doesn't work, failing with the very vague error
`videoencoder_request.cpp@382: Failed to copy the input buffer: invalid argument`. Binary-searching the max resolution, I found that 1632x1088 works and 1680x1120 doesn't. NVIDIA also says the width and height must be even, so this is what I've settled on while retaining the aspect ratio.
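The binary search described above can be sketched as follows; `probe` is a hypothetical stand-in for actually trying a resolution against the encoder, and the search assumes acceptance is monotonic in the scale:

```python
def max_working_resolution(probe, src_w=1920, src_h=1280):
    """Binary-search the largest downscaled resolution the encoder accepts,
    keeping the source 3:2 aspect ratio and even width/height (both
    required by the NVIDIA encoder). `probe(w, h)` is a hypothetical
    callback that returns True if the encoder accepts that size."""
    lo, hi = 0, src_w // 2  # search over half-widths so width stays even
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        w = 2 * mid
        h = w * src_h // src_w  # preserve aspect ratio
        h -= h % 2              # force even height
        if probe(w, h):
            best = (w, h)
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# With a probe that (hypothetically) accepts anything up to the empirically
# found limit, the search lands on the resolution used above:
print(max_working_resolution(lambda w, h: w <= 1632 and h <= 1088))
# → (1632, 1088)
```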
Nodes 2-4 are all from the Isaac ROS suite and are built from source from cloned NVIDIA repos. We probably want to fork the ones that I've made small adjustments to.
Nodes 1-4 are run as ROS2 composable nodes, so they all run in one process. This produces topics from `/gt_box/hdr_x/image_raw` to `/gt_box/hdr_x/image_compressed`, plus all the intermediate stages. The NVIDIA nodes keep all the data on the GPU without copying it, using `isaac_ros_nitros`, so this all has minimal overhead.
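As a small illustration of the per-camera topic chain, the stages can be enumerated like this; only `image_raw` and `image_compressed` are confirmed topic names above, the intermediate names are made-up placeholders:

```python
def hdr_topic_chain(cam: str) -> list:
    """Hypothetical helper listing the pipeline stages for one HDR camera:
    raw capture -> resize -> format conversion -> h264 encoding."""
    base = f"/gt_box/{cam}"
    return [
        f"{base}/image_raw",        # v4l2_camera output (confirmed name)
        f"{base}/image_resized",    # ResizeNode output (assumed name)
        f"{base}/image_rgb8",       # ImageFormatConverterNode output (assumed name)
        f"{base}/image_compressed", # h264 encoder output (confirmed name)
    ]

print(hdr_topic_chain("hdr_left"))
```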
The single node encompassing 1-4 is called `grand_tour_box` and can be launched with:

```
ros2 launch grand_tour_box grand_tour_box_recording.launch.py
```

The 3 hdr_camera nodes can be launched on their own with:

```
ros2 launch grand_tour_box hdr_cameras.launch.py
```
Presented by RSL.