Segmentation fault (core dumped)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) T4 GPU (EC2 - g4dn.2xlarge)
• DeepStream Version - 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.6
• NVIDIA GPU Driver Version (valid for GPU only) - 535.161.08
• Issue Type( questions, new requirements, bugs)
When running DeepStream Python code in a Docker container, I get a segmentation fault.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I have attached the log file. Could you please help us resolve this issue?
ds-logs.zip (13.1 MB)

1. Which sample did you use to reproduce the problem? I can’t find any crash information in this log.

2. If you are running an application you created yourself, try using gdb to get the crash stack:

gdb --args python3 "your_py_application.py"
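Alongside gdb, Python's built-in faulthandler module can print the Python-level traceback when the interpreter receives a fatal signal such as SIGSEGV, which helps narrow down which pipeline call crashed. A minimal sketch (the enable call would go at the very top of your script):

```python
import faulthandler

# Enable the fault handler: on SIGSEGV, SIGFPE, SIGABRT or SIGBUS the
# interpreter dumps the Python traceback of all threads to stderr
# before the process dies.
faulthandler.enable()

print(faulthandler.is_enabled())  # → True
```

This does not replace the native backtrace from gdb, but the two together show both the C-level crash site and the Python call that triggered it.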
  1. I am using deepstream_test_3.py with the PeopleNet model.

  2. I have run the code with gdb and attached the log.
    ds-log.txt (8.5 KB)

How did you build the engine file? Did you copy it from another device?

ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid
0:00:13.612238293   646 0x641c22c70120 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 1]: Could not find output layer 'output_cov/Sigmoid' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:13.613924161   646 0x641c22c70120 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 1]: Could not find output layer 'output_bbox/BiasAdd' in engine
0:00:13.613939272   646 0x641c22c70120 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /home/tentovision/peoplenet_2.3.3/resnet34_peoplenet_int8.onnx_b16_gpu0_int8.engine

No, it’s built on this instance during the first run.

PeopleNet version pruned_quantized_decrypted_v2.3.3

1. Have you changed output-blob-names in the configuration file? The correct value should be as follows:

output-blob-names=output_bbox/BiasAdd:0;output_cov/Sigmoid:0

2. Can you reproduce the problem with a sample stream?
I suspect the issue may be related to your HTTP stream.

python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 --pgie nvinfer -c config_infer_primary_peoplenet.txt --no-display
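As a quick self-check, the nvinfer config file can be parsed with Python's configparser to confirm that output-blob-names lists both PeopleNet output layers. A minimal sketch (the sample string below is a stand-in for the relevant part of your actual config_infer_primary_peoplenet.txt):

```python
import configparser

# Stand-in for the [property] section of the nvinfer config file.
sample = """
[property]
output-blob-names=output_bbox/BiasAdd:0;output_cov/Sigmoid:0
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Layer names are separated by ';' in DeepStream config files.
names = cfg["property"]["output-blob-names"].split(";")
print(names)  # → ['output_bbox/BiasAdd:0', 'output_cov/Sigmoid:0']

# Both PeopleNet output layers must be present, with the ':0' suffix
# that the ONNX-exported model uses for its output tensors.
assert "output_bbox/BiasAdd:0" in names
assert "output_cov/Sigmoid:0" in names
```

If either assertion fails against your real config, the "Cannot find binding of given name" errors above are expected, because the engine's tensor names will not match what nvinfer is asking for.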

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
