I’ve tried your code with the ONNX model you sent. The ONNX model does not match the DeepStream detector definition: the default detector post-processing needs two output layers (a coverage/confidence layer and a bounding-box layer). Please refer to the code in /opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp.
It cannot work as configured: you set it up as a detector with the default bbox parser, but the model has only one output layer. The parser code is open source; please refer to it. For a single-output model you need to supply your own parser, as in the sketch below.
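A minimal sketch of such a custom parser, registered via parse-bbox-func-name and custom-lib-path in the nvinfer config. Only the entry-point signature is fixed by the SDK (nvdsinfer_custom_impl.h); the function name NvDsInferParseCustomFaceNet and the decoding of the output buffer are placeholders you must adapt to your model:

```cpp
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Hypothetical symbol name: point parse-bbox-func-name in the nvinfer
 * config at this function and custom-lib-path at the built .so. */
extern "C" bool NvDsInferParseCustomFaceNet (
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
  /* Your model has a single output layer. */
  const NvDsInferLayerInfo &layer = outputLayersInfo[0];
  const float *out = static_cast<const float *> (layer.buffer);

  /* Model-specific decoding goes here: walk `out`, build one
   * NvDsInferObjectDetectionInfo per detection that passes the
   * per-class thresholds in detectionParams, and push it to objectList. */
  (void) out; (void) networkInfo; (void) detectionParams; (void) objectList;

  return true;
}

/* Compile-time check that the signature matches the SDK's expectation. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE (NvDsInferParseCustomFaceNet);
```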
I’ve tested your GitHub repo xya22er/facenet_deepstream-test4. Since scikit-learn is not available in our environment, I removed the normalization code. The face detection model is the TLT 2.0 face detection model (refer to /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/README).
Many customers have applied different models to the back-to-back sample. You need to debug it yourself to find the cause of the failure; it is not reasonable to ask us to debug your code.
I’ve tested your code (minus the scikit-learn part) and cannot reproduce the failure. If the test2 sample works, back-to-back should work too; they are very similar.
I can reproduce the segmentation fault with your code. It has nothing to do with the model. The segmentation fault is caused by your implementation of generate_face_meta: every field of NvDsFaceObject must be assigned, but you only assigned “apparel”, “gender” and “age”. This meta is consumed by other modules such as nvmsgconv, so unset fields will cause problems.
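For reference, a minimal sketch of a fully initialized NvDsFaceObject, following the generate_person_meta pattern in the stock deepstream-test4 app. The field list follows nvdsmeta_schema.h, so verify it against your DeepStream version; the values are placeholders, and if you use the Python bindings you should set the corresponding attributes the same way:

```cpp
#include <glib.h>
#include "nvdsmeta_schema.h"

/* Every string pointer in NvDsFaceObject must point at a valid,
 * separately allocated string: downstream code (nvmsgconv, and the
 * meta copy/free callbacks) reads and frees all of them, so a field
 * left unset is dereferenced as garbage and crashes the pipeline. */
static void
generate_face_meta (NvDsFaceObject *obj)
{
  obj->gender     = g_strdup ("female");
  obj->hair       = g_strdup ("black");
  obj->cap        = g_strdup ("none");
  obj->glasses    = g_strdup ("none");
  obj->facialhair = g_strdup ("none");
  obj->name       = g_strdup ("unknown");
  obj->eyecolor   = g_strdup ("brown");
  obj->age        = 30;   /* numeric field, not a string */
}
```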