Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 6.0.1
• JetPack Version (valid for Jetson only) - 4.6.3
• TensorRT Version - 8.4
• NVIDIA GPU Driver Version (valid for GPU only) - CUDA 10.2 (Jetson)
• Issue Type( questions, new requirements, bugs) - Bug/question: when I run ultra_light_320.onnx I get the error "Failed to parse bboxes" followed by "Segmentation fault (core dumped)". How can I resolve this issue?
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
The error I am getting:
jetson@ubuntu:~/Downloads/face_detection_1$ python3 face_detection.py
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating nv3dsink
Atleast one of the sources is live
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
Starting pipeline
ERROR: Deserialize engine failed because file path: /home/jetson/Downloads/face_detection_1/ultra_light_320.onnx_b1_gpu0_fp16.engine open error
0:00:03.420459913 14184 0x19673c90 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/jetson/Downloads/face_detection_1/ultra_light_320.onnx_b1_gpu0_fp16.engine failed
0:00:03.421663423 14184 0x19673c90 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/jetson/Downloads/face_detection_1/ultra_light_320.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:03.421710195 14184 0x19673c90 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:03:53.427831781 14184 0x19673c90 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/jetson/Downloads/face_detection_1/ultra_light_320.onnx_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x240x320
1 OUTPUT kFLOAT scores 4420x2
2 OUTPUT kFLOAT boxes 4420x4
ERROR: [TRT]: 3: Cannot find binding of given name: scores/boxes
0:03:53.440351734 14184 0x19673c90 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 1]: Could not find output layer 'scores/boxes' in engine
0:03:53.498925130 14184 0x19673c90 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:deepstream_pgie_config_facenet.txt sucessfully
Decodebin child added: source
**PERF: {'stream0': 0.0}
Decodebin child added: decodebin0
Decodebin child added: rtph265depay0
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 279
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 279
In cb_newpad
NVPARSER: HEVC: Seeking is not performed on IRAP picture
NVPARSER: HEVC: Seeking is not performed on IRAP picture
0:03:57.351834939 14184 0x1960a320 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:03:57.351925930 14184 0x1960a320 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)
The configuration file is:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#tlt-model-key = nvidia_tlt
onnx-file = ultra_light_320.onnx
model-engine-file = ultra_light_320.onnx_b1_gpu0_fp16.engine
labelfile-path= labels.txt
infer-dims=3;240;320
#uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
cluster-mode=2
interval=0
gie-unique-id=1
output-blob-names=scores/boxes
[class-attrs-all]
topk=20
nms-iou-threshold=0.2
pre-cluster-threshold=0.2
Based on the "Cannot find binding of given name: scores/boxes" warning, I changed output-blob-names to a semicolon-separated list, and I also added parse-bbox-func-name and custom-lib-path:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#tlt-model-key = nvidia_tlt
onnx-file = ultra_light_320.onnx
model-engine-file = ultra_light_320.onnx_b1_gpu0_fp16.engine
labelfile-path= labels.txt
infer-dims=3;240;320
#uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
cluster-mode=2
interval=0
gie-unique-id=1
parse-bbox-func-name = NvDsInferParseCustomONNX
custom-lib-path = libnvds_infercustomparser.so
output-blob-names=scores;scores
[class-attrs-all]
topk=20
nms-iou-threshold=0.2
pre-cluster-threshold=0.2
jetson@ubuntu:~/Downloads/face_detection_1$ python3 face_detection.py
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating nv3dsink
Atleast one of the sources is live
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
Starting pipeline
0:00:08.567133869 18047 0x20f05890 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/jetson/Downloads/face_detection_1/ultra_light_320.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x240x320
1 OUTPUT kFLOAT scores 4420x2
2 OUTPUT kFLOAT boxes 4420x4
0:00:08.574908531 18047 0x20f05890 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/jetson/Downloads/face_detection_1/ultra_light_320.onnx_b1_gpu0_fp16.engine
0:00:08.625772096 18047 0x20f05890 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:772> [UID = 1]: Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseCustomONNX pointer
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:08.676586807 18047 0x20f05890 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:08.676642016 18047 0x20f05890 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: deepstream_pgie_config_facenet.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
**PERF: {'stream0': 0.0}
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: deepstream_pgie_config_facenet.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Exiting app
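
From the dlsym error I assume the stock libnvds_infercustomparser.so simply does not export a function named NvDsInferParseCustomONNX, so I would probably have to compile my own parser library for this model's outputs (scores 4420x2, boxes 4420x4). Below is a rough sketch of what I think such a parser would look like. The function name NvDsInferParseCustomUltraLight and the library name are my own, and I am assuming the boxes output holds normalized [x1, y1, x2, y2] corners and that the second scores column is the face probability.

// ultralight_parser.cpp - sketch of a custom bbox parser for ultra_light_320.onnx
// Assumptions: boxes are normalized [x1, y1, x2, y2]; scores column 1 is face probability.
// Build (include path may differ on my setup):
//   g++ -shared -fPIC -std=c++11 -o libultralight_parser.so ultralight_parser.cpp \
//       -I/opt/nvidia/deepstream/deepstream/sources/includes
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomUltraLight(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // Locate the two output layers by name.
    const NvDsInferLayerInfo *scoresLayer = nullptr;
    const NvDsInferLayerInfo *boxesLayer = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (std::strcmp(layer.layerName, "scores") == 0) scoresLayer = &layer;
        else if (std::strcmp(layer.layerName, "boxes") == 0) boxesLayer = &layer;
    }
    if (!scoresLayer || !boxesLayer) return false;

    const float *scores = static_cast<const float *>(scoresLayer->buffer);
    const float *boxes  = static_cast<const float *>(boxesLayer->buffer);
    const unsigned int numAnchors = scoresLayer->inferDims.d[0];   // 4420
    const float threshold = detectionParams.perClassPreclusterThreshold[0];

    for (unsigned int i = 0; i < numAnchors; i++) {
        float faceProb = scores[i * 2 + 1];        // column 1 = face probability (assumed)
        if (faceProb < threshold) continue;

        // Scale normalized corners to network input resolution (320x240).
        float x1 = boxes[i * 4 + 0] * networkInfo.width;
        float y1 = boxes[i * 4 + 1] * networkInfo.height;
        float x2 = boxes[i * 4 + 2] * networkInfo.width;
        float y2 = boxes[i * 4 + 3] * networkInfo.height;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = 0;                           // single "face" class
        obj.detectionConfidence = faceProb;
        obj.left = x1;
        obj.top = y1;
        obj.width = x2 - x1;
        obj.height = y2 - y1;
        objectList.push_back(obj);
    }
    return true;
}
// Checks that the exported function matches the expected parser prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomUltraLight);

If I build a library like this, I assume I would point custom-lib-path at it and set parse-bbox-func-name to the exported function name. Is this the right direction, or is there an existing parser for this model that I should use instead?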