Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Orin Nano 4GB
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: 5.0.2
• NVIDIA GPU Driver Version (valid for GPU only): 11.4
• Issue Type (questions, new requirements, bugs)
I am recording and running inference on RTSP streams using DeepStream Python, with yolov7-tiny as the inference model. I can only achieve 12 RTSP streams; if I increase to 13 or 14, the FPS drops to around 18 to 19.
I also tried the Jetson Orin Nano 8GB module; with it I can achieve 14 streams, but cannot increase further.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Create the pipeline in the following order: nvstreammux->nvinfer->nvvideoconvert->capsfilter->nvstreamdemux->queue->nvvideoconvert->nvv4l2h264enc->h264parse->splitmuxsink
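The element order above can be sketched as a gst-launch style description string. Everything below is an illustrative assumption (element properties, config file name, output file pattern), not code from the actual application:

```python
# Hypothetical sketch: build a gst-launch style description for the
# pipeline order nvstreammux -> nvinfer -> nvvideoconvert -> capsfilter
# -> nvstreamdemux -> per-stream recording branches. The resulting
# string is what Gst.parse_launch() would consume.

def build_pipeline_desc(num_streams: int) -> str:
    # Shared head: batch all streams, run inference, convert, demux.
    head = (
        f"nvstreammux name=mux batch-size={num_streams} width=1280 height=720 "
        "! nvinfer config-file-path=yolov7_tiny.txt "
        "! nvvideoconvert "
        "! capsfilter caps=\"video/x-raw(memory:NVMM),format=NV12\" "
        "! nvstreamdemux name=demux"
    )
    # One recording branch per demuxed stream.
    branches = " ".join(
        f"demux.src_{i} ! queue ! nvvideoconvert ! nvv4l2h264enc "
        f"! h264parse ! splitmuxsink location=rec_{i}_%05d.mp4"
        for i in range(num_streams)
    )
    return head + " " + branches
```

This only sketches the topology; in the real Python app the elements would be created and linked individually with `Gst.ElementFactory.make()`.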
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Is the above pipeline correct? If it is, why can't I run more than 12 streams? Is there any limitation?
Kindly help me resolve this issue.
Can you check the output of the command "tegrastats"? It looks like there is a bottleneck on some hardware component.
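As a rough aid, a tegrastats line can be parsed to see which component is saturated. The regexes below assume the typical JetPack 5.x field layout and may need adjusting for other releases:

```python
import re

# Hedged sketch: pull RAM usage, per-core CPU load, and GPU load out of
# one tegrastats output line. Field layout varies across JetPack
# versions, so these patterns are assumptions, not a stable contract.

def parse_tegrastats(line: str) -> dict:
    ram = re.search(r"RAM (\d+)/(\d+)MB", line)        # e.g. "RAM 3057/3956MB"
    cpu = re.findall(r"(\d+)%@\d+", line)              # e.g. "97%@1420" per core
    gpu = re.search(r"GR3D_FREQ (\d+)%", line)         # GPU (GR3D) load
    return {
        "ram_used_mb": int(ram.group(1)) if ram else None,
        "ram_total_mb": int(ram.group(2)) if ram else None,
        "cpu_loads_pct": [int(c) for c in cpu],
        "gpu_load_pct": int(gpu.group(1)) if gpu else None,
    }
```

If every CPU core sits near 100% while GR3D is low, the bottleneck is CPU-side (for example, software encoding), not the GPU.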
As there is no hardware encoder in Orin Nano, I'm wondering what the behavior of the "nvv4l2h264enc" plugin is (sorry, I don't have an Orin Nano to test with).
There is no HW encoder in Orin Nano, so your pipeline will not work as-is on it. Recording with a software encoder takes a lot of CPU resources, so the performance will be very poor.
For generating RTSP streams, please refer to the GStreamer documentation and community: GStreamer: open source
There is also an RTSP output sample in the pyds samples; please replace the HW encoder with a SW encoder (for example, x264enc) if you want to use it on Orin Nano.
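One hedged way to do that swap on a pipeline-description string is shown below; the x264enc property values are assumptions chosen to limit CPU load, not values taken from the sample:

```python
# Hypothetical sketch: on Orin Nano (no NVENC) replace the HW H.264
# encoder in a recording branch with GStreamer's software x264enc.
# A videoconvert stage is inserted because x264enc expects raw video
# in system memory, not NVMM buffers.

HW_BRANCH = "nvv4l2h264enc ! h264parse ! splitmuxsink location=rec_%05d.mp4"

def to_sw_encoder(branch: str) -> str:
    """Return the branch with nvv4l2h264enc swapped for x264enc."""
    return branch.replace(
        "nvv4l2h264enc",
        "nvvideoconvert ! videoconvert ! "
        "x264enc speed-preset=ultrafast tune=zerolatency",
    )
```

`speed-preset=ultrafast` trades compression efficiency for lower CPU cost, which matters here since each extra software-encoded stream adds CPU load.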