Error when running with multiple RTSP sources

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only) CUDA 10.2

Hi there,
I'm following the deepstream-test3 example to build a face detection system. I changed the default model to a face detection model in the config path. When I run only one RTSP source it works well, but when I add another RTSP source it raises the error below.

This is the config file.
I don't know why this happens. How can I solve it? Please help.

Where did you get the engine file? Judging by its name, the engine's batch size is 1, but the config file sets the batch size to 32. Does your model support explicit batch size or implicit batch size? It seems the engine you are using only supports batch size 1, while nvinfer requires batch size 2 capability for two sources. What is your nvstreammux batch size for the two RTSP sources? Can you try changing the batch size to 1 in your nvinfer config file?
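For reference, a minimal sketch of the settings involved (the engine file name is a placeholder; the property names follow the standard nvinfer config format, and the streammux line matches how deepstream-test3 sets its batch size):

```
# nvinfer config file (e.g. config_infer_primary.txt)
# batch-size must not exceed the max batch size the engine was built with;
# if the engine was built with batch size 1, set this to 1
[property]
batch-size=1
model-engine-file=model_b1_gpu0_fp16.engine
```

In the deepstream-test3 app itself, the nvstreammux batch size is set to the number of sources, e.g. `streammux.set_property("batch-size", number_sources)` in the Python version, so with two RTSP sources the muxer emits batch-2 buffers and the downstream engine must support at least that batch size.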

Hi Chen.
You're right. I rebuilt the engine file with batch size 32 and it works well.
Thanks a lot.
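For anyone hitting the same issue: one way to rebuild an engine with a larger max batch size is TensorRT's trtexec tool. This is only a sketch — the model file, input name, and dimensions below are placeholders for your actual face detection model, and `--maxBatch` applies to implicit-batch models (for ONNX explicit-batch models you would use `--shapes` instead):

```
# Rebuild the engine with max batch size 32 (implicit-batch UFF model assumed;
# replace the file names and input/output tensor names with your own)
trtexec --uff=facedetect.uff --uffInput=input_1,3,416,416 --output=output_bbox \
        --maxBatch=32 --fp16 --saveEngine=model_b32_gpu0_fp16.engine
```

After rebuilding, point `model-engine-file` in the nvinfer config at the new engine and keep its `batch-size` at or below 32.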

Hi Chen.
I followed the back-to-back detector example and ran it successfully. However, I want to run it with multiple input sources. I changed the source code, and it works when I run only one source, but it doesn't work when I set more than one source; it raises the error below.

I see that the error is in the
gst_nvinfer_output_loop function in gstnvinfer.cpp, at lines 1966 to 1981.

Do you know why, and how can I fix it? Please help me.

One topic per issue, please. Can you create a new topic for the new issue?

OK. I created a new topic, Back-to-back detector with more than one source. Please help me there if you know the answer.