Segmentation fault when sending messages through RabbitMQ

I've tried your code with the ONNX model you sent. The ONNX model does not match the DeepStream detector definition. The default detector post-processing needs two output layers. Please refer to the code in /opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp.

Your model layers:
0 INPUT kFLOAT input_1:0 3x160x160 min: 1x3x160x160 opt: 4x3x160x160 Max: 4x3x160x160
1 OUTPUT kFLOAT Bottleneck_BatchNorm/batchnorm_1/add_1:0 128 min: 0 opt: 0 Max: 0

The ONNX file and the config file work for me, but the problem appears when I use the msgbroker.
I tried the code with fakesink and the problem is gone.

When I tried to add multistream with the msgbroker, the segmentation fault came back, even with fakesink. I also tried on an AGX, but the problem is still the same.

It cannot work. Your config sets it up as a detector and uses the default bbox parser, but the model has only one output layer, so it will not work. The parser code is open source; please refer to it.
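For a model whose single output is an embedding rather than bounding boxes, the usual approach is to stop nvinfer from running the default detector parser and instead expose the raw output tensor to your own post-processing. Below is a minimal sketch of the relevant [property] keys; the file name and values are illustrative assumptions, not taken from your attached configs:

# nvinfer config sketch for a single-output embedding model (illustrative values)
[property]
onnx-file=facenet.onnx       # hypothetical model file
network-type=1               # classifier, so the default bbox parser is not applied
output-tensor-meta=1         # attach raw output tensors for custom post-processing
process-mode=2               # run as SGIE on objects found by the PGIE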

Do you mean the PGIE or the SGIE? I do the post-processing on the SGIE.

There is only one GIE in your current code: deepstream_test4/deepstream-test4c at main · xya22er/deepstream_test4 · GitHub

I did not give you this repo.

I gave you this:

I've tested your GitHub - xya22er/facenet_deepstream-test4. Since there is no scikit-learn in our environment, the normalization code was removed. The face detection model is the TLT 2.0 face detection model (refer to /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/README).

The results seem OK. No crash.

deepstream_test_4.py (30.8 KB)
face_detector_config.txt (2.8 KB)
face_recogniser_config.txt (3.5 KB)

Does it work with the back-to-back detector?

Back-to-back is PGIE + SGIE, so it should work.
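As a rough illustration, a back-to-back arrangement in the Python apps is just two nvinfer instances linked in series. The sketch below uses the config file names attached above; everything else follows the deepstream-test2 sample and is an assumption, not your exact pipeline:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

# Primary detector (PGIE) followed by secondary network (SGIE)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "face_detector_config.txt")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
sgie.set_property("config-file-path", "face_recogniser_config.txt")

pipeline.add(pgie)
pipeline.add(sgie)
# ... streammux -> pgie -> sgie -> nvvideoconvert -> nvdsosd -> sink, as in deepstream-test2
pgie.link(sgie)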

We encourage customers to use DeepStream with C/C++ for performance and debugging reasons.

I used the back-to-back detector.

The code that I gave you does not work on my device unless I change the sink to fakesink.

Many customers have applied different models to the back-to-back sample. You need to debug it yourself to find the reason for the failure. It is not reasonable to ask us to debug your code.

I told you before: I tested the models with the test2 and multistream examples and they work without any problem. The problem came when I applied them to test4.

I've tested your code (except for the scikit-learn part) and I cannot reproduce the failure. If the test2 sample works, back-to-back should work too; they are very similar.

Does it work well, or did you get a segmentation fault error?

No. There is no segmentation fault.

Does it work with display?

Yes. It can work.

I tried to use your change, but the app only works with no display. Why?
I tried on a Nano, an NX and an AGX, and the problem is the same.

I can reproduce the segmentation fault with the code. It has nothing to do with the model. The segmentation fault is caused by your implementation of generate_face_meta. Every field of NvDsFaceObject should be assigned, but you only assigned "apparel", "gender" and "age". This meta is used by other modules such as nvmsgconv, so it will cause problems.

The crash disappears after assigning every field.

def generate_face_meta(data, predicted_name, confidence, base64_predicted_image):
    # obj = pyds.NvDsFaceObject.cast(data)
    obj = pyds.NvDsPersonObject.cast(data)
    # Assign every field of the object; leaving fields unassigned is what
    # caused the crash in nvmsgconv.
    obj.apparel = "predicted_name"
    obj.cap = "none"
    obj.hair = "black"
    # confidence
    # print("confidence ", confidence)
    obj.age = 1
    # image
    obj.gender = "base64_predicted_image"
    return obj

deepstream_test_4_cust.py (31.3 KB)

Please follow our sample code when you implement new functions.
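For reference, this is roughly how the deepstream-test4 Python sample attaches such an object to the event message meta so nvmsgconv can serialize it. The helper name and the prediction variables are assumptions for illustration; meta_copy_func and meta_free_func are the copy/release callbacks defined in that sample:

import sys
import pyds

def attach_face_msg_meta(batch_meta, frame_meta, obj_meta,
                         predicted_name, confidence, base64_predicted_image):
    # Build the event message meta and attach the fully populated person
    # object as extMsg, following the deepstream-test4 Python sample.
    msg_meta = pyds.alloc_nvds_event_msg_meta()
    msg_meta.objType = pyds.NvDsObjectType.NVDS_OBJECT_TYPE_PERSON
    msg_meta.objClassId = obj_meta.class_id

    face = pyds.alloc_nvds_person_object()
    face = generate_face_meta(face, predicted_name, confidence, base64_predicted_image)
    msg_meta.extMsg = face
    msg_meta.extMsgSize = sys.getsizeof(pyds.NvDsPersonObject)

    user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
    if user_event_meta:
        user_event_meta.user_meta_data = msg_meta
        user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
        # The copy/release callbacks from the sample are required so the meta
        # can be copied safely downstream by nvmsgconv/nvmsgbroker.
        pyds.user_copyfunc(user_event_meta, meta_copy_func)
        pyds.user_releasefunc(user_event_meta, meta_free_func)
        pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)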