I am using the isaac_ros_dnn_image_encoder to convert an RGB8 image to a nitros_tensor_list_nchw_rgb_f32 tensor.
The isaac_ros_dnn_image_encoder documentation (isaac_ros_docs) states that the image is cropped, not resized, to match the network_image dimensions.
To test this, I encode a message and then decode it back to a sensor_msgs/msg/Image.
On the left is the input image (1920x1080); on the right is the tensor decoded back to an image (896x896).
The output image is not cropped but resized with the aspect ratio retained (note that the right image is 1:1 with letterboxed black bars, while the left image is 16:9).
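The black-bar geometry I observe matches what an aspect-ratio-preserving (letterbox) resize would produce. A minimal sketch of that arithmetic, assuming the usual `scale = min(net/in)` letterbox convention (the function name is mine, not part of the encoder's API):

```python
# Expected geometry if the encoder letterboxes rather than crops,
# assuming scale = min(net_w / in_w, net_h / in_h) as in typical letterboxing.
def letterbox_geometry(in_w, in_h, net_w, net_h):
    scale = min(net_w / in_w, net_h / in_h)
    content_w = round(in_w * scale)
    content_h = round(in_h * scale)
    pad_x = (net_w - content_w) // 2  # black bar width on each side
    pad_y = (net_h - content_h) // 2  # black bar height on top and bottom
    return content_w, content_h, pad_x, pad_y

print(letterbox_geometry(1920, 1080, 896, 896))  # → (896, 504, 0, 196)
```

For my 1920x1080 input this predicts a 896x504 image region with 196-pixel black bars top and bottom, which is what the decoded image shows.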
image_encoder_node = ComposableNode(
    name="video_dnn_encoder",
    namespace=[LaunchConfiguration("ns"), "/panorama_server"],
    package="isaac_ros_dnn_image_encoder",
    plugin="nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode",
    parameters=[
        {
            "input_image_width": 1920,
            "input_image_height": 1080,
            "network_image_width": 896,
            "network_image_height": 896,
            "image_mean": [0.0, 0.0, 0.0],
            "image_stddev": [
                PIXEL_SCALE_INVERSE,
                PIXEL_SCALE_INVERSE,
                PIXEL_SCALE_INVERSE,
            ],
        }
    ],
    remappings=[
        ("encoded_tensor", "tensor_pub"),
        ("image", ["/", LaunchConfiguration("ns"), "/", VIDEO_INPUT_TOPIC]),
    ],
)
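For reference, this is a sketch of how I convert the flat NCHW float32 buffer back to an HWC uint8 image for visual inspection. The scale factor of 255 mirrors the stddev of 1/255 used at encode time; that normalization, and the tensor layout, are assumptions about the encoder rather than documented behaviour:

```python
import numpy as np

# Sketch: decode a flat NCHW float32 buffer (as carried in the tensor list)
# back to an HWC uint8 image. scale=255.0 assumes the encoder divided raw
# pixel values by 255 via the stddev parameter.
def tensor_to_image(data, height=896, width=896, channels=3, scale=255.0):
    chw = np.asarray(data, dtype=np.float32).reshape(channels, height, width)
    hwc = np.transpose(chw, (1, 2, 0))  # CHW -> HWC
    return np.clip(np.rint(hwc * scale), 0, 255).astype(np.uint8)

# Example: a uniform mid-gray tensor decodes to pixel value 128 everywhere.
flat = np.full(3 * 896 * 896, 128 / 255.0, dtype=np.float32)
img = tensor_to_image(flat)
print(img.shape)  # → (896, 896, 3)
```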
If I am interpreting this behaviour correctly, could the docs be updated to say the image is resized (letterboxed) rather than cropped?