Please provide complete information as applicable to your setup.
• AGX (32 GB)
• JetPack 4.5.1
• TensorRT 7.1.3
• NVIDIA L4T 32.5.1
• Issue Type: question
I have been testing the DeepStream pipeline in batch mode and in multiple-instance mode.
Relevant comments from the config file:
`# Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP`
`# Required to display the PGIE labels; should be added even when using config-file`
Method 1 (single instance, batched):
- Set the number of streams to 24 in the DeepStream config file ([source0])
- Set the batch size to 24 in [primary-gie]
- Run the DeepStream application with file-loop enabled
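A minimal sketch of the relevant config sections for the batched run, in the standard deepstream-app config format (the URI is a placeholder; other keys in these groups are omitted):

```ini
[source0]
enable=1
# Type 3 = MultiURI: one source group fans out to num-sources streams
type=3
uri=file:///path/to/sample.mp4
num-sources=24

[streammux]
# The muxer batch size should match the number of streams
batch-size=24

[primary-gie]
enable=1
# The inference engine consumes full 24-frame batches from the single muxer
batch-size=24

[tests]
# Loop the input file so all 24 streams keep running
file-loop=1
```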
Method 2 (24 separate instances):
- Set the number of streams to 1 in the DeepStream config file ([source0])
- Set the batch size to 1 in [primary-gie]
- Launch 24 individual DeepStream instances
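For comparison, a sketch of the per-instance config (each of the 24 processes loads its own copy of a file like this; the URI is a placeholder):

```ini
[source0]
enable=1
type=3
uri=file:///path/to/sample.mp4
num-sources=1

[streammux]
# Each process gets its own muxer, batching a single stream
batch-size=1

[primary-gie]
enable=1
batch-size=1

[tests]
file-loop=1
```

With this layout each process also builds its own copy of the inference engine on the GPU, rather than sharing one batched engine.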
Observation: the first method delivers almost twice the FPS at the sink. Is this because the first method takes advantage of parallel execution within the GPU for frames batched by a single muxer (which sits between the decoder and the inference engine), whereas the second method relies on preemptive GPU scheduling for frames coming from 24 individual muxers?