I’m using the following forum post to access the frame:
https://devtalk.nvidia.com/default/topic/1065395/deepstream-sdk/access-video-frame-and-use-opencv-to-draw-on-it-in-pad-probe-callback/post/5397089/#5397089
However, I had to modify the code to get the frames into the correct channel format. Here is the modified code:
cv::Mat frame = cv::Mat(frame_height * 3 / 2, frame_width, CV_8UC1, frame_data); // NV12: full-resolution Y plane followed by interleaved UV plane
cv::Mat src_mat_BGRA;
cv::cvtColor(frame, src_mat_BGRA, CV_YUV2BGRA_NV12); // convert NV12 to 4-channel BGRA
With the above changes I can store the frames locally as JPEGs; here is the resulting image:
https://imgur.com/a/SjKRM4T
If I use the code from the linked post as-is, I get the following image written locally:
https://imgur.com/a/SwJtZsh
Any idea how to access the frames correctly using OpenCV?
However, the issue I’m facing is that when I run the same modified code on the Jetson Nano, the channels get swapped and I get a corrupted image.
Hi,
frame_data is in NVBUF_COLOR_FORMAT_NV12. You may convert it to RGBA before using OpenCV APIs. Please refer to item 5 in the FAQ.
@DaneLLL thank you, the solution from the FAQ worked, but I’d like to know how to incorporate one small change. As written, it looks like the code handles a single source; what would you do for multiple sources?
For example, with a probe attached after the tracker:
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
The loop above iterates through frames from multiple sources; with a single source the code works as-is. Any directions on this? Thanks.
Hi,
Please check
https://devtalk.nvidia.com/default/topic/1061205/deepstream-sdk/rtsp-camera-access-frame-issue/post/5379662/#5379662
For multiple sources, you should configure batch-size to match the number of sources and check frame_meta->batch_id