Implementing a SecondaryGIE with a custom classification model causes perf to decrease to 0 fps

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version DeepStream 5.0
• TensorRT Version TensorRT 7.0
• CUDA Version 10.2
• NVIDIA GPU Driver Version (valid for GPU only) GPU Driver 440.100
• Docker Image DeepStream docker image from NGC

I am running a modified version of the deepstream-test5 example on 2 RTSP streams from cameras. With only the PrimaryGIE, the application runs smoothly at around 30 fps per stream (per the perf measurement). I want to add a simple custom model, which I have already serialized to a .engine file, as a SecondaryGIE, and get its output as tensor metadata.

The DeepStream app compiles without errors, and when I run it the pipeline is created successfully. However, while the fps is around 25~30 at first, it gradually drops to 0.0 and never recovers. Sometimes this happens right after the pipeline starts; other times it only happens after 10~30 minutes of running.

I changed the config file and also added a bit of code to deepstream_test5_app_main.c to retrieve the tensor metadata (which I learned from the deepstream_infer_tensor_meta example). In the config file for the SecondaryGIE I disabled classifier-async-mode, since I need the tensor metadata for every object, and I have noticed that turning it back on lets DeepStream run with no problem. The relevant properties are sketched below.
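For reference, this is roughly the [property] group of my SecondaryGIE nvinfer config; the engine file name, batch size, and unique IDs here are illustrative placeholders, not the actual values from my files:

```
[property]
gpu-id=0
# placeholder path; the real file is the custom .engine in the drive
model-engine-file=custom_classifier.engine
batch-size=16
# network-type=1: classifier; process-mode=2: operate on objects (secondary)
network-type=1
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
# attach the raw output tensors to each object as NvDsInferTensorMeta
output-tensor-meta=1
# synchronous classification, so tensor meta is produced for every object
classifier-async-mode=0
```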

I have included all the files I modified in the drive, including my model engine. In deepstream_test5_app_main.c I only added a bit of code in the function generate_event_msg_meta (from around line 526): https://drive.google.com/drive/folders/1ZfSFGEoobICD6O685w4WLckf-Um7VKhC?usp=sharing
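For context, the added code follows the same pattern as the SGIE probe in deepstream_infer_tensor_meta_test. Below is a minimal sketch of that pattern; the function name parse_sgie_tensor_meta and the FP32 read are my own illustrative assumptions, not the exact code in the drive:

```c
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

/* Hypothetical helper illustrating the deepstream_infer_tensor_meta pattern:
 * walk an object's user meta list and read the SGIE's raw output tensors. */
static void
parse_sgie_tensor_meta (NvDsObjectMeta *obj_meta)
{
  for (NvDsMetaList *l = obj_meta->obj_user_meta_list; l != NULL; l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
      continue;

    NvDsInferTensorMeta *tensor_meta =
        (NvDsInferTensorMeta *) user_meta->user_meta_data;

    for (unsigned int i = 0; i < tensor_meta->num_output_layers; i++) {
      NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
      /* Point the layer at its host-side buffer (valid because
       * output-tensor-meta=1 is set in the SGIE config). */
      layer->buffer = tensor_meta->out_buf_ptrs_host[i];

      /* Assuming an FP32 classification head; the real post-processing
       * depends on the model's output layout. */
      float *probs = (float *) layer->buffer;
      (void) probs;
    }
  }
}
```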

Is there some sort of performance issue or memory issue with the SecondaryGIE? Or is there simply something wrong with my config properties?

Sorry for the late reply. Have you solved the problem? We cannot access the Google Drive link now.

Check whether sync=0 is set in the [sink] group of the config file, set the streammux batch push timeout to 33 ms, and check whether your model itself is causing the problem. A rough sketch of those settings is below.
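In a deepstream-app style config those two settings would look roughly like this (group names follow the deepstream-test5 config; surrounding properties are omitted):

```
[streammux]
# 33 ms expressed in microseconds, matching 30 fps sources
batched-push-timeout=33000

[sink0]
enable=1
# sync=0 so rendering does not throttle or stall the pipeline
sync=0
```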