To whom it may concern,
My situation is that I have a video player that publishes every frame of the video to ROS as a sensor_msgs/Image. On my Xavier I then subscribe to these images and want to convert them to dwImageCUDA for DriveNet and LaneNet detection.
As we know, the ROS sensor_msgs/Image has the following members. I think it should be possible to extract the height/width/encoding/data into a dwImageCUDA or dwImageCPU, but I cannot find a good approach to do it.
std_msgs/Header header
uint32 height
uint32 width
string encoding
uint8 is_bigendian
uint32 step
uint8[] data
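For reference, my understanding of how these fields relate: step is the number of bytes per row, so for a packed rgba8 image step should equal width * 4, and data.size() should equal step * height. The encoding names ("rgb8", "rgba8", etc.) come from sensor_msgs/image_encodings; the helper below is just my own sketch, not part of ROS:

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Bytes per pixel for the ROS encodings I care about. This helper is my
// own, not part of the ROS API.
uint32_t bytesPerPixel(const std::string& encoding) {
    if (encoding == "rgb8"  || encoding == "bgr8")  return 3;
    if (encoding == "rgba8" || encoding == "bgra8") return 4;
    throw std::runtime_error("unsupported encoding: " + encoding);
}

// For a packed image, step (bytes per row) should be width * bytesPerPixel,
// and the data buffer should hold step * height bytes.
bool dimensionsConsistent(uint32_t width, uint32_t height,
                          const std::string& encoding,
                          uint32_t step, size_t dataSize) {
    return step == width * bytesPerPixel(encoding) &&
           dataSize == static_cast<size_t>(step) * height;
}
```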
My code looks roughly like this:

// ros::init and the subscriber setup are omitted.
// Here I receive the image and want to convert it to the NVIDIA image format.
// The code is only a sketch, so just let me know if the direction is correct or not.
void callback(const sensor_msgs::Image::ConstPtr& msg)
{
    dwImageHandle_t h;
    dwImageCPU* imageCPU;
    dwImageProperties prop{};
    prop.height = msg->height;
    prop.width = msg->width;
    prop.format = DW_IMAGE_FORMAT_RGBA_UINT8;
    prop.type = DW_IMAGE_CPU;
    dwImage_create(&h, prop, context);
    dwImage_getCPU(&imageCPU, h);
    // Question 1: here I want to copy the ROS image data into imageCPU,
    // but I don't know how.
    // imageCPU->data[0] = msg->data.data()?? What about imageCPU->pitch[0]?
    // Then use an image streamer from CPU to CUDA.
}
So, there are two questions here:
- As indicated in Question 1, what is the appropriate way to put an array into the image structure defined by NVIDIA?
- My current plan is ROS -> imageCPU -> imageCUDA. I even found that OpenCV may be required as a bridge, which would finally look like ROS -> cv::Mat -> imageCPU -> imageCUDA. However, I wonder if there is a more straightforward way to do this, like ROS -> imageCUDA directly?
Thanks for helping me out.
Best,
Hanyang