I followed the [Custom Parser for SSD-MobileNet Trained by Jetson-inference](https://elinux.org/index.php?title=Jetson/L4T/TRT_Customized_Example#Custom_Parser_for_SSD-MobileNet_Trained_by_Jetson-inference)
guide to use a jetson-inference custom model with DeepStream.
My device is a Xavier NX running JetPack 4.5 with DeepStream 5.1.
Sharing my config:
```
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
model-engine-file=ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
onnx-file=ssd-mobilenet.onnx
labelfile-path=labels.txt
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

# Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
```
My error after running the script:
```
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 4 output network tensors.
0:16:01.296449381 18313 0x36368a00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_SSD/ssd-mobilenet.onnx_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_0 3x300x300
1 OUTPUT kFLOAT scores 3000x3
2 OUTPUT kFLOAT boxes 3000x4
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: MarkOutput_0
0:16:01.304274698 18313 0x36368a00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 1]: Could not find output layer 'MarkOutput_0' in engine
0:16:01.329662584 18313 0x36368a00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_SSD/config_infer_primary_ssd.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: decodebin1
Decodebin child added: rtppcmudepay0
Decodebin child added: rtph265depay0
Decodebin child added: mulawdec0
In cb_newpad
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 279
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 279
In cb_newpad
Could not find NMS layer buffer while parsing
0:16:03.730012257 18313 0x3642fe80 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:724> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault
```
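From the log, the ONNX engine's actual output bindings are `scores` (3000x3) and `boxes` (3000x4), while the config's `output-blob-names=MarkOutput_0` and the sample `NvDsInferParseCustomSSD` parser target the UFF SSD's NMS output, hence "Cannot find binding of given name: MarkOutput_0" and "Could not find NMS layer buffer while parsing" before the crash. Below is only a hedged, standalone sketch (plain structs, no DeepStream headers; `Detection` and `parseScoresBoxes` are names invented here) of the kind of decoding a parser matching the `scores`/`boxes` layout would need to do; my understanding is that jetson-inference's ONNX SSD emits per-box class scores with class 0 as background and normalized `[x1, y1, x2, y2]` boxes:

```cpp
#include <vector>

// Hypothetical stand-in for the DeepStream type; a real parser would fill
// NvDsInferObjectDetectionInfo from the layer buffers nvinfer hands it.
struct Detection {
    int classId;
    float confidence;
    float left, top, width, height;
};

// Decode ssd-mobilenet ONNX outputs (assumed layout):
//   scores: numBoxes x numClasses, class 0 = background
//   boxes:  numBoxes x 4, normalized [x1, y1, x2, y2]
// Keeps any non-background class above `threshold`; a real parser would
// also clamp coordinates and usually apply NMS afterwards.
std::vector<Detection> parseScoresBoxes(const float* scores, const float* boxes,
                                        int numBoxes, int numClasses,
                                        float threshold, int netW, int netH) {
    std::vector<Detection> out;
    for (int i = 0; i < numBoxes; ++i) {
        for (int c = 1; c < numClasses; ++c) {  // skip background class 0
            float conf = scores[i * numClasses + c];
            if (conf < threshold) continue;
            const float* b = &boxes[i * 4];
            Detection d;
            d.classId    = c;
            d.confidence = conf;
            d.left   = b[0] * netW;
            d.top    = b[1] * netH;
            d.width  = (b[2] - b[0]) * netW;
            d.height = (b[3] - b[1]) * netH;
            out.push_back(d);
        }
    }
    return out;
}
```

If this matches the model, the config's `output-blob-names` would also need to name the real bindings (e.g. `scores;boxes`) instead of `MarkOutput_0`, and `parse-bbox-func-name` would have to point at a parser built for this layout rather than the NMS-based UFF sample.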