I’ve been trying for a while to wrap data from a camera stream obtained via videoSource::Capture into a VPIImage, but failed repeatedly. The VPI documentation examples only seem to use OpenCV as the video/image source.
How can I wrap the uchar3* image output from videoSource::Capture into a VPIImage? (I would like to take advantage of managed memory on Jetson Nano).
Is videoSource::Capture the OpenCV video source?
No, the ‘videoSource’ I refer to is the class from jetson-utils/video, which captures camera frames using gstCamera (in my case, a CSI camera). As far as I can tell from the gstCamera code, the data is already treated as device memory, but it is returned as uchar3*, i.e. interleaved pixel data in x, y, z order (x: R, y: G, z: B?).
I would like to process this data with VPI algorithms, which, as far as I understand from the VPI documentation, requires the data to be in VPIImage format. It is also not clear whether the data must be locked (vpiImageLock) when it comes from videoSource, as shown in the VPI samples.
I am currently not able to figure out how to do this wrapping, from uchar3* to VPIImage + VPIImageData.
For those who might be interested, I managed to make it work with the following code, skipping safety checks:
...
// Init jetson-utils videoSource
camera = videoSource::Create(input_res.c_str(), camera_options);
// capture frame:
uchar3* image = NULL;
camera->Capture(&image);
// Manually fill VPIImageData with your video params
VPIImageData vpiImageData;
vpiImageData.format = VPIImageFormat::VPI_IMAGE_FORMAT_RGB8;
vpiImageData.numPlanes = 1;
vpiImageData.planes[0].data = image;
vpiImageData.planes[0].width = 1280;
vpiImageData.planes[0].height = 720;
vpiImageData.planes[0].pitchBytes = 3 * 1280;
vpiImageData.planes[0].pixelType = VPIPixelType::VPI_PIXEL_TYPE_3U8;
// Wrap vpiImageData into a VPIImage before passing it to VPI algorithms (sketch below)
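The VPIImageData still has to be wrapped into an actual VPIImage before VPI algorithms will accept it. Roughly, that remaining step looks like this (a sketch using vpiImageCreateCUDAMemWrapper from VPI 1.x; error checking omitted):
VPIImage vpiImage = NULL;
// The frame from videoSource::Capture() lives in CUDA memory, so the CUDA
// memory wrapper is used; 0 = default flags.
vpiImageCreateCUDAMemWrapper(&vpiImageData, 0, &vpiImage);
// vpiImage can now be passed as input to VPI algorithm submit calls.
// When the stream is finished:
vpiImageDestroy(vpiImage);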
I am able to export the contents to OpenCV’s Mat to check for data consistency, and it seems to be working well, except that the R and B channels are swapped. How can I solve that? I tried changing VPIImageData.format to VPI_IMAGE_FORMAT_BGR8, but it does not fix it; maybe it’s a problem with the color conversion in gstManager?
To view the stream in OpenCV, I copy the frame back to the host and display it, roughly like the following (simplified sketch, hard-coded for a 1280x720 RGB8 frame):
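#include <cuda_runtime.h>
#include <opencv2/opencv.hpp>

// Copy the CUDA frame from videoSource::Capture() into host memory and show it:
cv::Mat hostFrame(720, 1280, CV_8UC3);
cudaMemcpy(hostFrame.data, image, 720 * 1280 * 3, cudaMemcpyDeviceToHost);
cv::imshow("stream", hostFrame);   // this is where R and B appear swapped
cv::waitKey(1);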
Unfortunately, the Python examples are not useful, since the data first goes through numpy, leading to a completely different API that does not exist in C++.
I managed to view the image with the code from my last reply (filling VPIImageData manually), but the R and B color channels are swapped when exported to cv::Mat; I would really appreciate some help with this.
Please help, I am struggling a lot trying to understand how to do the uchar3* to VPIImage conversion.
Is VPIImage an object or just a pointer to image data (pixels in a given image format)?
When should vpiImageLock be used? What is its purpose, considering the shared memory architecture of Jetson boards?
The description of jetson-utils usage with VPI is practically nonexistent for C++, but the documentation encourages the use of videoSource, so please help with that.
If this is not the correct channel for development support, please point me to the right one.
@_padreco the uchar3* from jetson-utils is just a pointer to CUDA memory. VPIImage is a struct, and you should be able to use vpiImageCreateWrapper() with VPI_IMAGE_BUFFER_CUDA_PITCH_LINEAR:
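Something roughly like this (a sketch following the VPI 2.x pitch-linear layout; the 1280x720 RGB8 parameters are assumptions and should match your stream):
VPIImageData data = {};
data.bufferType = VPI_IMAGE_BUFFER_CUDA_PITCH_LINEAR;
data.buffer.pitch.format = VPI_IMAGE_FORMAT_RGB8;
data.buffer.pitch.numPlanes = 1;
data.buffer.pitch.planes[0].pixelType = VPI_PIXEL_TYPE_3U8;
data.buffer.pitch.planes[0].width = 1280;
data.buffer.pitch.planes[0].height = 720;
data.buffer.pitch.planes[0].pitchBytes = 1280 * 3;
data.buffer.pitch.planes[0].data = image;   // uchar3* from videoSource::Capture()

VPIImage img = NULL;
vpiImageCreateWrapper(&data, NULL, 0, &img);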
Thanks for the reply, it clarified a lot, and thanks for the Python example, but my application requires C++.
I am currently using VPI version 1.2 (the latest for Jetson Nano), which does not have the method you suggested. It does have vpiImageCreateCUDAMemWrapper, which I was trying to use to wrap the data into a VPIImage, but with the code example I posted above, the blue and red channels ended up swapped.
In the end, I decided to go with CUDA-compiled OpenCV, since some of the functionality I was looking for (RGB to HSV conversion) is missing from the VPI libraries.
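For anyone else going this route, the capture buffer can be wrapped into a cv::cuda::GpuMat without copying and converted on the GPU; a rough sketch (needs OpenCV built with the cudaimgproc module, and the 1280x720 size is just my stream’s resolution):
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

// Wrap the device pointer from videoSource::Capture() without copying
// (step = width * 3 bytes for a packed RGB8 frame):
cv::cuda::GpuMat rgb(720, 1280, CV_8UC3, image, 1280 * 3);

// Convert RGB -> HSV on the GPU:
cv::cuda::GpuMat hsv;
cv::cuda::cvtColor(rgb, hsv, cv::COLOR_RGB2HSV);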
Thanks for the help and the good work on jetson-utils, and please, whenever possible, try to add examples of VPI + jetson-utils usage.