Bug in DeepStream 5.1 when using a custom secondary classifier

• Hardware Platform (Jetson / GPU)
Jetson AGX Xavier
• DeepStream Version
5.1
• JetPack Version (valid for Jetson only)
4.5.1
• Issue Type( questions, new requirements, bugs)
Bug:
DeepStream crashes with a segmentation fault, or simply stops working, when using a secondary classifier that outputs e.g. an identification array of size 128.
It works just fine when the same model is used as a primary classifier.

I’ve tested this with both nvinfer and nvinferserver (also disabling post-processing), with the same result.

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I’ve been using deepstream_test1.py as a base and added a secondary classifier for the detected objects, which outputs a signature array.
All code, the configuration file, and instructions for creating a simple model are located here

It also describes how to run the test and reproduce the bug.
In short: use a model that outputs a 128-element array and add it as a secondary classifier.
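For reference, a minimal sketch of the sgie config the reproduction relies on (file names and values below are placeholders, not from the original post); the essential combination is process-mode=2 (secondary, operating on the detector's objects) together with output-tensor-meta=1 so the raw 128-element output is attached to the metadata:

```ini
[property]
gpu-id=0
# Placeholder engine; any model with a 128-element output reproduces the issue
model-engine-file=signature_128.engine
batch-size=16
# 1 = classifier
network-type=1
# 2 = secondary mode: operate on objects from the primary detector
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
# Attach the raw output tensor (the 128-element array) to the metadata
output-tensor-meta=1
classifier-async-mode=0
```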

Hey customer,
I saw the configs below in your sgie config file. Could you enable async mode and disable output-tensor-meta, and see if the issue persists?

classifier-async-mode=0 ==>change to 1
output-tensor-meta=1 ===> change to 0

If output-tensor-meta=0 and classifier-async-mode=1, then the program works, but I get this warning:
0x17b28960 WARN nvinfer gstnvinfer.cpp:957:gst_nvinfer_start: warning: NvInfer asynchronous mode is applicable for secondary classifiers only. Turning off asynchronous mode

If output-tensor-meta=0 and classifier-async-mode=0, then the program also works, but without the warning.
(I guess the warning is a bit misleading, because process-mode=2.)

But I need output-tensor-meta to make my application work…

Could you please confirm that this is a bug in DeepStream when enabling output-tensor-meta for secondary classifiers? Do I need to file a bug report somewhere? Will this be fixed? When?
Or how do I enable output-tensor-meta for the classifier without DeepStream crashing?

Sorry for the delay. The issue should be caused by the output buffer pool size; we will provide an extra config item in nvinfer to increase the output buffer pool size.
For now, you can try increasing the batch size of the sgie, or directly change NVDSINFER_CTX_OUT_POOL_SIZE_FLOW_META to a larger value, such as 16 or 32, in gstnvinfer.cpp and see if the issue persists.
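A hedged sketch of what that workaround looks like in the gst-nvinfer plugin sources (the shipped default value and the exact file location may differ by DeepStream version; the plugin must be rebuilt and reinstalled after editing):

```cpp
/* gstnvinfer.cpp (DeepStream gst-nvinfer plugin sources):
 * size of the output buffer pool used when output-tensor-meta is enabled.
 * Raising it, e.g. to 32, is the suggested workaround until a dedicated
 * config item is exposed in the nvinfer config file. */
#define NVDSINFER_CTX_OUT_POOL_SIZE_FLOW_META 32
```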