Please provide complete information as applicable to your setup.
GPU: RTX 2070 Super
DeepStream Version: 6.2
Driver Version: 525.105.17
Docker image: deepstream:6.2-devel
I am trying to run the sample app deepstream-opencv-test. I read the instructions and installed gst-dsexample as needed. Afterwards I built deepstream-opencv-test and everything works as expected, with the bounding boxes being blurred. However, the FPS is quite low. I then changed the eglsink to a fakesink and used the video file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4:
if (prop.integrated) {
  sink = gst_element_factory_make ("nv3dsink", "nvvideo-renderer");
} else {
  //sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
  sink = gst_element_factory_make ("fakesink", "fake-renderer");
}
This particular video takes about 60 seconds to complete with the fakesink.
However, if I create the equivalent pipeline with gst-launch, using the same video and the same pgie config, it takes only about 10 seconds to complete:
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-opencv-test/dsopencvtest_pgie_config.txt ! nvvideoconvert nvbuf-memory-type=nvbuf-mem-cuda-unified ! 'video/x-raw(memory:NVMM), format=RGBA' ! dsexample full-frame=0 blur-objects=1 ! nvdsosd ! fakesink
So I'm curious why the C++ pipeline is so much slower than the equivalent gst-launch pipeline.