I have built a GStreamer pipeline with an appsink (emit-signals=true).
Device: Jetson AGX Orin 64 GB, JetPack 5.1.1, OpenCV built with CUDA enabled
This is my pipeline:
thetauvcsrc ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! queue ! nvvidconv ! nvdewarper config-file=dewarp-file.txt ! m.sink_0 nvstreammux name=m width=600 height=480 batch-size=4 num-surfaces-per-frame=4 ! nvmultistreamtiler width=1920 height=960 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! appsink sync=0 emit-signals=true drop=true
I implemented this pipeline in a .cpp file and it works: I confirmed it by swapping appsink for nv3dsink.
My main goal is to take frames from NVMM memory and feed them to my Detectron2 TensorRT engine, but cv::Mat lives in CPU memory, so the current path is:
NVMM memory (GPU) → cv::Mat (CPU) → preprocessing (CPU) → Detectron2 TensorRT engine (GPU)
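For reference, the CPU preprocessing step is essentially "pitch-linear RGBA → planar float CHW in [0,1]" (this is the part I would eventually want as a CUDA kernel instead). A minimal CPU sketch of that transform, with made-up dimensions and pitch just for illustration:

```cpp
#include <cstdint>
#include <vector>

// CPU reference of the preprocessing I currently do before TensorRT:
// pitch-linear RGBA (as nvvidconv outputs) -> planar float CHW in [0,1].
// The alpha channel is dropped. Pitch may be larger than width * 4.
std::vector<float> rgbaToCHW(const uint8_t* rgba, int width, int height, int pitch) {
    std::vector<float> chw(3 * width * height);
    for (int y = 0; y < height; ++y) {
        const uint8_t* row = rgba + y * pitch;   // step by pitch, not width*4
        for (int x = 0; x < width; ++x) {
            for (int c = 0; c < 3; ++c) {        // R, G, B planes; skip A
                chw[c * width * height + y * width + x] =
                    row[x * 4 + c] / 255.0f;
            }
        }
    }
    return chw;
}
```

This is exactly the copy + loop I would like to replace with GPU-side work.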
Is there a way to take frames straight from NVMM memory and hand them to TensorRT without a round trip through the CPU? (DeepStream would be perfect for this, but customizing DeepStream for Detectron2 is not something I can do right now.)
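From reading around, this is the direction I am considering: map the NvBufSurface behind the appsink buffer to an EGLImage, register it with CUDA, and wrap the resulting device pointer in a cv::cuda::GpuMat without copying. This is an untested, not compile-checked sketch; the function names come from the DeepStream NvBufSurface and CUDA EGL interop headers as I understand them, so treat every call and field here as an assumption, not a confirmed API usage:

```cpp
// Sketch only -- assumes DeepStream / Jetson Multimedia headers; untested.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <nvbufsurface.h>
#include <cuda_egl_interop.h>
#include <opencv2/core/cuda.hpp>

static GstFlowReturn on_new_sample(GstAppSink* sink, gpointer /*user_data*/) {
    GstSample* sample = gst_app_sink_pull_sample(sink);
    GstBuffer*  buf    = gst_sample_get_buffer(sample);

    GstMapInfo map;
    gst_buffer_map(buf, &map, GST_MAP_READ);
    // With memory:NVMM caps, the mapped data should be an NvBufSurface*.
    NvBufSurface* surf = reinterpret_cast<NvBufSurface*>(map.data);

    // On Jetson (NVBUF_MEM_SURFACE_ARRAY) the buffer is not a plain device
    // pointer, so map it to an EGLImage and register that with CUDA.
    NvBufSurfaceMapEglImage(surf, 0);
    cudaGraphicsResource_t res = nullptr;
    cudaGraphicsEGLRegisterImage(&res,
                                 surf->surfaceList[0].mappedAddr.eglImage,
                                 cudaGraphicsRegisterFlagsNone);
    cudaEglFrame frame;
    cudaGraphicsResourceGetMappedEglFrame(&frame, res, 0, 0);

    // Assuming a pitch-linear frame (frameType would need checking), wrap
    // the device pointer as a GpuMat -- no copy, RGBA, pitch-linear.
    cv::cuda::GpuMat rgba(surf->surfaceList[0].height,
                          surf->surfaceList[0].width,
                          CV_8UC4,
                          frame.frame.pPitch[0].ptr,
                          surf->surfaceList[0].pitch);
    // ... run CUDA preprocessing here, writing directly into the
    //     TensorRT input binding, then enqueue inference ...

    cudaGraphicsUnregisterResource(res);
    NvBufSurfaceUnMapEglImage(surf, 0);
    gst_buffer_unmap(buf, &map);
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}
```

If this is roughly the right shape, the RGBA → CHW preprocessing would become a CUDA kernel writing straight into the TensorRT input binding, and the CPU would never touch pixel data. Is that the intended way to do this on Jetson, or is there a simpler route?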
I thought of using cv::cuda::GpuMat, but I am not sure it can wrap NVMM buffers the way I need.
It may be a silly question, but I would appreciate any help. Thanks!