Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and other details for reproducing it.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.)
DeepStream version: 6.1.1 (docker container)
I am running the deepstream_imagedata-multistream app using yolov5l.
Outside the DeepStream container, on my local machine, we are getting 111 FPS as the inference speed for one frame.
But in that application with one source video, it shows only 33 FPS in DeepStream.
Any reason for this?
Could you describe how to get the 111 fps and 33 fps in detail?
Are you saying that with the same code, the fps can reach 111 on the host but only 33 in the docker?
To get 33 FPS, I'm just running the imagedata-multistream application with one video inside the docker container.
Outside the docker container, we run inference on one single image with the yolov5 TensorRT model and time how long that one image takes to infer. That speed turns out to be 111 FPS.
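For reference, a minimal sketch of how such a single-image timing measurement is typically done with the TensorRT Python API. The engine path, the assumption that binding 0 is the input, and the fixed-shape engine are all assumptions for illustration, not details from this thread:

```python
import time

import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

ENGINE_PATH = "yolov5l.engine"  # assumption: path to your serialized engine

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (assumes a fixed-shape engine).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

host_bufs[0].fill(0)  # put your preprocessed image here; zeros suffice for timing
stream = cuda.Stream()

def infer_once():
    # H2D copy of the input (assumes binding 0 is the input), run, D2H copies.
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for i in range(1, engine.num_bindings):
        cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
    stream.synchronize()

for _ in range(10):  # warm-up runs
    infer_once()

n = 100
t0 = time.perf_counter()
for _ in range(n):
    infer_once()
print(f"{n / (time.perf_counter() - t0):.1f} FPS")
```

Note that this measures only the engine-execution latency on an already-preprocessed tensor; it includes no decoding, preprocessing, tracking, OSD, or encoding, which is part of why the number is not comparable to a full pipeline's FPS.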
This comparison method doesn't have much reference value. The deepstream_imagedata-multistream app does a lot of other things: it transfers the buffer from GPU to CPU to draw the bounding boxes and save images, and the video decoder also costs much more time than an image decoder.
If you want to make a valid comparison, you need to use a similar pipeline with similar sources.
Yes. You can just change the h264parser to the plugin that corresponds to your image format, such as pngparse or jpegparse.
Remove the osd_sink_pad_buffer_probe.
Change the sink plugin to fakesink. A sketch of these three changes follows below.
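A hedged sketch of those three changes, using the Python GStreamer bindings the sample apps are built on. Element and variable names here are illustrative and may differ in your copy of the sample; on dGPU, nvjpegdec may be the appropriate JPEG decoder instead of nvv4l2decoder:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# 1. Parse the image format instead of H.264 (use "pngparse" for PNG sources).
source = Gst.ElementFactory.make("filesrc", "file-source")
parser = Gst.ElementFactory.make("jpegparse", "jpeg-parser")  # was h264parse
decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")

# 2. Do NOT attach the probe, i.e. delete the line that looks like:
#    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

# 3. Discard the output instead of rendering it.
sink = Gst.ElementFactory.make("fakesink", "fake-sink")
sink.set_property("sync", False)  # don't throttle to the pipeline clock
```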
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.