Segmentation fault when using secondary classifier

I can only work from the information you have provided. According to the log you posted, you are using the DeepStream sample detector model as the SGIE and trying to treat it as a classifier model with “is-classifier=1”. That is why your “Test3” fails.
Your “Test3” log shows the model has two output layers, and the input layer has width 640, height 368, and 3 channels.

The Gst-nvinfer section of the DeepStream 5.1 Release documentation mentions that “input-object-min-width” and “input-object-min-height” apply only to SGIEs. “input-object-min-width” should be at least the input layer width / 16, and “input-object-min-height” should be at least the input layer height / 16, because the Jetson HW scaler does not support scaling factors smaller than 1/16 or larger than 16.

So for the “Test3” you posted here, you need to correct your configuration as I mentioned in my last post.
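As a concrete illustration of that constraint, assuming a hypothetical SGIE model with a 640x368 input layer, the minimum object dimensions need to be at least width/16 and height/16. A minimal sketch of the relevant nvinfer config entries (the values below are derived from that assumed input size, not copied from your actual file):

```
# SGIE portion of a hypothetical nvinfer config for a 640x368 input model.
# Objects smaller than input-width/16 x input-height/16 would push the Jetson
# HW scaler past its 16x upscaling limit, so they are filtered out here.
[property]
process-mode=2              # 2 = secondary mode, infer on PGIE-detected objects
input-object-min-width=40   # 640 / 16 = 40
input-object-min-height=23  # 368 / 16 = 23
```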

For your own model, you can use the same method to analyze your configuration.

Can you show me the input and output layer dims for your model?

input-object-min-width has nothing to do with “maintain-aspect-ratio”.

As mentioned, the Gst-nvinfer documentation for DeepStream 5.1 states that “input-object-min-width/height” apply only to SGIEs. They limit the input bbox size because the Jetson HW scaler cannot handle scaling ratios smaller than 1/16 or larger than 16.

This is the output reported from DeepStream for the detectnet:
engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/trtis_model_repo/densenet_onnx/1/model.onnx_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data_0 3x224x224
1 OUTPUT kFLOAT fc6_1 1000x1x1

Thanks for your answers, they have helped me understand better, because I had missed the maximum scaling factor of 16.
I have tried setting input-object-min-height and -width to 14 (14*16=224) and still get the segmentation fault. But I guess the value of 64 does not exceed the boundary of 16x scaling, so that can't be the problem, right (64*3.5=224)? I can say that the ONNX model works if I put it into network-type 100. I just find it a bit odd that it is possible to output both metadata and classifier data for the 8-bit engine file (e.g. the car-color classifier) but not for the 16-bit ONNX file.

It has nothing to do with 8-bit or 16-bit. The default classifier postprocessing algorithm cannot handle your model's output, so you need to set “network-type=100”, remove “is-classifier=1”, and implement your own postprocessing. The settings “classifier-async-mode=1”, “classifier-threshold=0.51”, and “output-blob-names=predictions/Softmax” should be removed too.
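For reference, a minimal sketch of how the [property] section of that config could look after those changes; the remaining keys (model paths, batch size, gie-unique-id, etc.) are assumed to stay as in the Test3 file, and output-tensor-meta=1 is what makes the raw output tensors available to application-side postprocessing:

```
[property]
# ... onnx-file / model-engine-file, batch-size, gie-unique-id, etc. unchanged ...
process-mode=2          # run as SGIE on objects from the PGIE
network-type=100        # "other": nvinfer attaches raw tensors, no built-in parsing
output-tensor-meta=1    # attach NvDsInferTensorMeta so the app can parse the output
# removed: is-classifier=1, classifier-async-mode=1,
#          classifier-threshold=0.51, output-blob-names=predictions/Softmax
```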

It would be better to post the nvinfer config file you are currently using.

Please tell me what the problem is instead of pointing out what it isn't.
The detectnet IS a classifier with one output layer and 1000 classes (check the post above). You have the full config file above in Test 3. (You previously claimed, wrongly, that it has two layers and that the problem is with the model, but this is NVIDIA's model, etc.)

I just took the standard secondary config file and modified it to use the ONNX model, and then it crashes.
Please tell me why this happens.

Do you mean that I should write my own postprocessing algorithm for every new classifier that I integrate with DeepStream? That seems absurd; the data is handled the same way for all classifiers that use one-hot encoding. And I do have the option to point to the label file.

Just test the config file in deepstream-test2. You have the model (it comes with DeepStream). Then explain to me why I can't enable meta output together with the classifier data.

Can you post your current modified config file? Are you using deepstream-test2-app or deepstream-infer-tensor-meta-test for testing?

It depends on the model. If it matches our default processing, it can be integrated directly; if it does not, customization is needed.
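For a 1000-class softmax output like the one above, custom postprocessing typically means reading the NvDsInferTensorMeta that nvinfer attaches when output-tensor-meta=1 is set, which is what deepstream-infer-tensor-meta-test demonstrates. Below is a rough sketch of such a pad-probe callback, assuming a single float output layer of class probabilities; the structure and field names follow the DeepStream 5.1 headers, but treat it as an outline under those assumptions rather than drop-in code:

```c
/* Sketch of a pad-probe callback (e.g. on the SGIE src pad) that finds the
 * argmax of a softmax output attached to each object as NvDsInferTensorMeta. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
sgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user; l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        NvDsInferTensorMeta *tmeta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
        /* Assumes one output layer (e.g. fc6_1, 1000x1x1) with host buffers available. */
        NvDsInferLayerInfo *layer = &tmeta->output_layers_info[0];
        float *probs = (float *) tmeta->out_buf_ptrs_host[0];
        guint num_classes = layer->inferDims.numElements;

        guint best = 0;
        for (guint c = 1; c < num_classes; c++)
          if (probs[c] > probs[best])
            best = c;

        g_print ("object %" G_GUINT64_FORMAT " -> class %u (score %f)\n",
                 obj_meta->object_id, best, probs[best]);
        /* From here the app could attach its own classifier label to obj_meta. */
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

With “network-type=100”, the label lookup and thresholding live entirely in the application, which is the trade-off being discussed in this thread.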