Tlt-convert for custom trained YoloV4 model failed on Jetson Nano 4G

Bad news again: when I used the engine file to run inference on the Jetson Nano, the model loaded successfully,

0:00:09.885840553  8742     0x2d171c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models/firenet/trt1.engine
INFO: [FullDims Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x608x608       min: 1x3x608x608     opt: 8x3x608x608     Max: 16x3x608x608    
1   OUTPUT kINT32 BatchedNMS      1               min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT BatchedNMS_2    200             min: 0               opt: 0               Max: 0               
4   OUTPUT kFLOAT BatchedNMS_3    200             min: 0               opt: 0               Max: 0               

0:00:09.886070038  8742     0x2d171c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models/firenet/trt1.engine
0:00:10.090188678  8742     0x2d171c70 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/kai/workspace/firenet/ds-fire-perception/../specs/pgie_yolov4.txt sucessfully

However, inference failed with the following error message:

ERROR: [TRT]: Assertion failed: status == STATUS_SUCCESS
/home/kai/workspace/TensorRT/plugin/batchedNMSPlugin/batchedNMSPlugin.cpp:246
Aborting...
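The BatchedNMS layer is implemented in libnvinfer_plugin, so one quick sanity check is to see which copy of that library the dynamic loader resolves (a minimal check, assuming the default JetPack library path):

ldconfig -p | grep libnvinfer_plugin
# list the actual files and symlinks behind it
ls -l /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*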

The config file used in my code is:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=labels.txt
model-engine-file=../models/tlt_pretrained_models/firenet/trt1.engine
tlt-model-key=tlt-encode
infer-dims=3;608;608
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
is-classifier=0
network-type=1
#no cluster
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../models/lib/libnvds_infercustomparser_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
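For completeness, the parser library referenced by custom-lib-path above comes from NVIDIA's deepstream_tlt_apps repository; a rough sketch of building it, assuming that repository's layout and a JetPack 4.x CUDA install:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.git
cd deepstream_tlt_apps/post_processor
# CUDA_VER must match the CUDA toolkit on the device (10.2 on JetPack 4.5)
export CUDA_VER=10.2
make
# yields libnvds_infercustomparser_tlt.so, copied to ../models/lib/ above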

Did you build the TRT OSS plugin as mentioned in the TLT user guide?
Reference topic: Convert tensorrt engine from version 7 to 8 - #67 by Morganh
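If not, the rough build sequence on the Nano looks like this (a sketch only, not verbatim from the guide; the branch and the libnvinfer_plugin version must match your TensorRT install, assumed here to be TensorRT 7.1.3 from JetPack 4.5 / DeepStream 5.1, and the build needs CMake >= 3.13):

git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
# GPU_ARCHS=53 targets the Nano's Maxwell GPU
cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
# back up the stock plugin library, then install the rebuilt one
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ~/libnvinfer_plugin.so.7.1.3.bak
sudo cp out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig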
