[GStreamer - Jetson TX1] Capture source on GPU Jetson TX1

Hi guys,
I'm implementing some computer vision algorithms on the Jetson TX1, with the goal of accelerating processing speed.
I used the FrameSource API on the Jetson TX1 to capture from the camera, but it doesn't perform as I expected.
It only captures approximately 18 to 20 fps, which is quite slow.
When I dug into the NVIDIA source code for camera capture, I realized that it uses the GStreamer pipeline v4l2src device=/dev/video* ! videoconvert ! capsfilter ! appsink to read the camera source, and now I see why it is slow.
The reason is videoconvert, which performs poorly and is not hardware-accelerated.
I want 60 frames per second, so I wrote my own pipeline to read the camera source, replacing videoconvert with nvvidconv.
The pipeline looks like: v4l2src device=/dev/video* ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1, format=I420' ! appsink sync=false async=false.
I added a callback to listen for the event when data reaches the appsink, and I pull samples from it.
But when I map the buffer (GstMapInfo), the mapped data is very small, and I don't know how to convert it into a cv::Mat (I get a segmentation fault when I create the cv::Mat as cv::Mat(height, width, map.data)).

Am I doing this the right way?
Any suggestions for my issue?

Many thanks !

Regards,
Trung.

Hi AsleyTrung, what is the data format of your v4l2 source?

Dear DaneLLL,
I configured the driver for the TC358743 (B102) chip to output UYVY format at 60 fps.

Regards,
Trung.

Hi Trung,
I think you can make a full OpenCV solution. I believe OpenCV supports a v4l2 source, but I'm not sure how to do it. Please try to get help here: http://answers.opencv.org/questions/

Also, appsink starts to support UYVY from OpenCV 3.3.
( source: https://github.com/opencv/opencv/blob/master/modules/videoio/src/cap_gstreamer.cpp )

You can follow http://dev.t7.ai/jetson/opencv/ to install 3.3 and run:

v4l2src ! video/x-raw,format=UYVY,width=1920,height=1080 ! appsink

Dear DaneLLL,
I still haven't understood nvvidconv with the memory:NVMM option.
When I use v4l2src ! videoconvert ! capsfilter ! appsink and listen for the event to pull data, everything works, but slowly.
So I replaced videoconvert with nvvidconv. If I use a capsfilter with the memory:NVMM option, like:
v4l2src ! nvvidconv ! 'video/x-raw(memory:NVMM), …' ! appsink, I receive a very small buffer (map size is 776) at the appsink event. And when I only use 'video/x-raw, …', the stream doesn't work at all; the error says "internal data flow error".
I inspected nvvidconv with gst-inspect and found that both the src and sink templates support video/x-raw and video/x-raw(memory:NVMM) with almost the same formats.
I don't know why the 'video/x-raw(memory:NVMM)' case at least still runs while the other one fails with an internal data flow error. (In both cases I use the same format allowed by the nvvidconv template spec.)

Also, when I installed OpenCV 3.3 on the Jetson TX1 and built a full OpenCV solution as you suggested, I still get errors. The output says:
OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline
) in cvCaptureFromCAM_GStreamer, file /home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp, line 796
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp:796: error: (-2) GStreamer: cannot find appsink in manual pipeline
in function cvCaptureFromCAM_GStreamer

What do you suggest?

Regards,
Trung.

Hi Trung,
video/x-raw(memory:NVMM) is a DMA buffer. If you need HW functionality, you need nvvidconv to convert CPU buffers into DMA buffers.

Your use case is pure CPU processing, so you don't need nvvidconv.

Other posts about OpenCV on TX1/TX2:
https://devtalk.nvidia.com/default/topic/987537/videocapture-fails-to-open-onboard-camera-l4t-24-2-1-opencv-3-1/
https://devtalk.nvidia.com/default/topic/1024245/jetson-tx2/opencv-3-3-and-integrated-camera-problems-/post/5210735/#5210735