Launching customized .etlt model on jetson

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson Xavier NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1-1+cuda10.2

Hello,

I have an .etlt model that was trained on a YOLOv3 architecture with the TAO Toolkit (v3.21.11) on an NVIDIA EC2 instance with CUDA 11.2. The output files are the .etlt model and the calibration file. I used the code in deepstream_tao_apps to run inference on the Jetson with my new model, but it produces the errors below.

My configuration file is:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=my_yolo_labels.txt
model-engine-file=../../deepstream_tao_apps/models/tao_model/saved.engine
int8-calib-file=../../deepstream_tao_apps/models/tao_model/cal.bin
tlt-encoded-model=../../deepstream_tao_apps/models/tao_model/yolov3_resnet18_epoch_002.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;544;960
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
#no cluster
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Any suggestion would be helpful. Thanks.

From the log ("failed to parse ONNX model"), it seems the .etlt file is broken. Can you try using tao-converter to convert the TAO model to a TensorRT engine?

Thanks for your reply. I will try the conversion.
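For anyone following along, a tao-converter invocation for a YOLOv3 .etlt model with this input shape might look roughly like the sketch below. This is an assumption-laden example, not a verified command for this exact setup: the key, file paths, and batch size are taken from the config in this thread and must be replaced with your own values (in particular, -k must be the key the model was exported with).

# Sketch: convert the .etlt model to a TensorRT engine on the Jetson.
# -k  encoding key used at export (placeholder here)
# -d  input dims (C,H,W) matching infer-dims in the nvinfer config
# -o  output blob name(s)
# -c  INT8 calibration cache
# -t  engine precision
# -e  path for the generated engine file
tao-converter -k nvidia_tlt \
              -d 3,544,960 \
              -o BatchedNMS \
              -c cal.bin \
              -t int8 \
              -m 1 \
              -e saved.engine \
              yolov3_resnet18_epoch_002.etlt

The path passed to -e should then match model-engine-file in the nvinfer config so DeepStream loads the prebuilt engine instead of rebuilding it.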

After replacing the key with my actual API key instead of "nvidia_tlt", and making the following change:
infer-dims=3;544;960
it now runs without errors, although the performance still needs improvement.
