I'm implementing some computer vision algorithms on a Jetson TX1, with the goal of accelerating processing speed.
I used the FrameSource API on the Jetson TX1 to capture from the camera, but it doesn't work as I expected: it only captures approximately 18 to 20 fps, which is quite slow.
When I dug into NVIDIA's implementation source code for camera capture, I realized that they use the GStreamer pipeline v4l2src device=/dev/video* ! videoconvert ! capsfilter ! appsink to read the camera source, and I think I know why it is slow: videoconvert performs poorly because it runs on the CPU and is not hardware-accelerated.
I want to get 60 frames per second, so I wrote my own pipeline to read the camera source, replacing videoconvert with nvvidconv.
The pipeline looks like: v4l2src device=/dev/video* ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1, format=I420' ! appsink sync=false async=false.
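For reference, I build and start the pipeline roughly like this (a simplified sketch, not my exact code; /dev/video0 stands in for my actual device):

#include <gst/gst.h>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);

    // Same pipeline as above, with the appsink named so I can fetch it later.
    const char* desc =
        "v4l2src device=/dev/video0 ! nvvidconv ! "
        "video/x-raw(memory:NVMM), width=1920, height=1080, "
        "framerate=60/1, format=I420 ! "
        "appsink name=sink sync=false async=false";

    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(desc, &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // ... attach the appsink callback and run a GMainLoop (see below) ...
    return 0;
}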
I added a callback to listen for the event when data arrives at the appsink, and I pull the data from it there.
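I connect the callback roughly like this (simplified; attach_sink_callback and on_new_sample are just my placeholder names):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Called by the appsink every time a new frame is available.
static GstFlowReturn on_new_sample(GstAppSink* sink, gpointer /*user_data*/) {
    GstSample* sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_ERROR;
    // ... map the buffer and process it here (see below) ...
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

static void attach_sink_callback(GstElement* pipeline) {
    GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_object_set(sink, "emit-signals", TRUE, nullptr);  // appsink signals are off by default
    g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), nullptr);
    gst_object_unref(sink);
}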
But when I map the buffer (GstMapInfo), the mapped data has a very small size, and I don't know how to convert it into a cv::Mat (I get a segmentation fault when I create the cv::Mat like cv::Mat(height, width, map.data)).
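Inside the callback, here is roughly what I expected to work, assuming the mapped buffer is a full I420 frame in system memory with no row padding (I suspect that with memory:NVMM caps the map only exposes a small hardware buffer descriptor rather than the pixels, which would explain the tiny size, but I'm not sure):

#include <gst/gst.h>
#include <opencv2/opencv.hpp>

static void process_sample(GstSample* sample) {
    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (!gst_buffer_map(buffer, &map, GST_MAP_READ))
        return;

    const int width = 1920, height = 1080;  // must match the negotiated caps

    // I420 holds 12 bits per pixel: width x (height * 3 / 2) bytes.
    // Note the explicit CV_8UC1 type argument. If map.size is much smaller
    // than width * height * 3 / 2, the mapped data is not a raw frame
    // (probably an NVMM handle) and wrapping it like this reads out of bounds.
    cv::Mat yuv(height * 3 / 2, width, CV_8UC1, map.data);

    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);  // copies, so bgr stays valid after unmap

    gst_buffer_unmap(buffer, &map);
    // ... run the vision algorithms on bgr ...
}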
Am I going about this the right way?
Any suggestions for my issue would be appreciated.
Many thanks!