Exported model can't be moved to Jetson TX2

Please provide the following information when requesting support.

• Hardware: Jetson TX2
• Network Type: resnet18 + unet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file: unet_train_resnet_unet_isbi.txt (2.8 KB)

• How to reproduce the issue? The problem occurred when moving the trained model to the TX2.

The spec file of my trained model is unet_train_resnet_unet_isbi.txt (2.8 KB),

and the pgie config file is pgie_unet_tlt_config(3).txt (2.1 KB).

I was using the above website as a reference.
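For context, a pgie (nvinfer) config for a TAO UNet model typically looks roughly like the sketch below. This is not the attached file: the paths, key, dims, and class count are placeholders, and the preprocessing values are assumptions that must match the training normalization.

    [property]
    gpu-id=0
    # Preprocessing: must match the normalization used during training (assumed values).
    net-scale-factor=0.007843
    offsets=127.5;127.5;127.5
    model-color-format=0
    # Encrypted TAO model plus the key used at export time (placeholders).
    tlt-encoded-model=unet_resnet18.etlt
    tlt-model-key=<your_key>
    # Match model_input_channels;height;width from the training spec.
    infer-dims=3;320;320
    uff-input-blob-name=input_1
    output-blob-names=softmax_1
    batch-size=1
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    # network-type 2 = semantic segmentation
    network-type=2
    num-detected-classes=2
    segmentation-threshold=0.0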

Please try with GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.

Thank you for your reply. I have followed the README, but I think the problem is that I generated the engine file with TRT 7.2.3 (TAO 3.0), while the TRT OSS plugin doesn't seem to have a matching supported version.

Is there a way to upgrade the TRT version in the plugin, or to downgrade the TRT version in the docker?

It is not related to the TRT version of the docker, because you will run inference on the TX2. So you need to copy the .etlt model to the TX2, and then either

  1. generate the TRT engine on the TX2 via tao-converter (see the sketch below), or
  2. let DeepStream generate the TRT engine.
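For option 1, a minimal tao-converter invocation on the TX2 might look like the sketch below. The key, file names, and dimensions are placeholders: the dims must match the training spec, and the -p flag assumes an ONNX-based .etlt with dynamic batch (a UFF-based model would take -d/-o instead).

    # $KEY must be the exact key used when the model was exported.
    # The -p shapes are min/opt/max batch dims; match the channels, height,
    # and width to the training spec (3x320x320 here is a placeholder).
    ./tao-converter -k $KEY \
        -p input_1,1x3x320x320,4x3x320x320,16x3x320x320 \
        -t fp16 \
        -e unet_resnet18.engine \
        unet_resnet18.etlt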

We build the TRT OSS plugin only in order to replace libnvinfer_plugin.so.
Please try to run GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream with the official demo .etlt models.

Refer to the topic below, where I rebuilt the TRT OSS plugin and replaced it on an NX.

For more reference, see YOLOv4 - NVIDIA Docs.
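For reference, rebuilding the TRT OSS plugin on a Jetson follows the deepstream_tao_apps README, roughly as in the sketch below. The branch and .so version here are assumptions that must match the TensorRT shipped with your JetPack (e.g. 7.1.3), and GPU_ARCHS=62 is the compute capability of the TX2.

    # Check out the TensorRT OSS branch matching the installed TensorRT version.
    git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
    cd TensorRT && git submodule update --init --recursive
    mkdir -p build && cd build
    # Requires a recent CMake (>= 3.13).
    cmake .. -DGPU_ARCHS=62 \
             -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu \
             -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)
    # Back up the stock plugin, then replace it with the rebuilt one.
    sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ~/libnvinfer_plugin.so.7.1.3.bak
    sudo cp out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
    sudo ldconfig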


So I can't just run through this and use the engine file it generated?

Yes.
Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the inference environment's TensorRT or CUDA libraries are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported; it will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
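As a practical check before generating an engine, you can confirm which TensorRT and CUDA versions the target device actually has (Jetson commands; package names can vary by JetPack release):

    # List the installed TensorRT and CUDA packages on the Jetson.
    dpkg -l | grep -E "nvinfer|cuda"
    # Or query TensorRT from Python, if the python3 bindings are installed.
    python3 -c "import tensorrt; print(tensorrt.__version__)"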


Thanks for the reply. I have followed your advice and run tao-converter on the TX2, but now this problem has occurred:

I can't understand why it's related to ASCII?

I solved it by fixing the key value (with the wrong key the model can't be decrypted, hence the decoding error). Not careful enough, my bad. Thank you for your reply, appreciated!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.