Exported model can't move to Jetson TX2

This is not related to the TensorRT version inside the docker, because you will run inference on the TX2. You need to copy the .etlt model to the TX2 and then either (rough examples below):

  1. generate the TensorRT engine on the TX2 via tao-converter, or
  2. let DeepStream generate the TensorRT engine on its first run.
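
For option 1, a minimal sketch of a tao-converter call on the TX2 is below. The model name, encoding key, input dimensions, output node name, and precision are placeholders that depend on your exported model, so check them against your export step and the TAO converter documentation:

```
# Hypothetical example: build a TensorRT engine on the TX2 from a TAO .etlt model.
# -k : encoding key used when the .etlt was exported
# -d : input dimensions C,H,W of the exported model
# -o : output node name(s); model dependent (e.g. BatchedNMS for YOLOv4)
# -t : engine precision
# -e : path of the generated engine file
./tao-converter yolov4_resnet18.etlt \
    -k <your_encoding_key> \
    -d 3,384,1248 \
    -o BatchedNMS \
    -t fp16 \
    -e yolov4_resnet18.engine
```

For option 2, point the nvinfer config of your DeepStream app at the .etlt file; if the engine file does not exist yet, DeepStream builds it on the first run. A hypothetical excerpt (file names and key are placeholders):

```
[property]
# DeepStream builds the engine from the .etlt when model-engine-file is missing.
tlt-encoded-model=yolov4_resnet18.etlt
tlt-model-key=<your_encoding_key>
model-engine-file=yolov4_resnet18.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
```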

We build the TensorRT OSS plugin only to replace libnvinfer_plugin.so.
Please try running the samples from GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps that demonstrate how to deploy models trained with TAO on DeepStream) with the official demo .etlt models.
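
As a rough outline, the rebuild-and-replace step on a TX2 (aarch64/JetPack) looks like the following. The branch, the GPU_ARCHS value (62 corresponds to the TX2), and the library version suffix must match your installed TensorRT/JetPack release, so follow the deepstream_tao_apps README for the exact values:

```
# Hypothetical outline: rebuild the TensorRT OSS plugin on the TX2 and swap it in.
# Check out the branch that matches the TensorRT version shipped with your JetPack.
git clone -b <matching_release_branch> https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

mkdir -p build && cd build
# GPU_ARCHS=62 is the SM version of the TX2; TRT_LIB_DIR points at the JetPack TensorRT libs.
cmake .. -DGPU_ARCHS=62 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin and replace it with the rebuilt one
# (the x.y.z suffix must match your installed TensorRT version).
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.x.y.z \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.x.y.z.bak
sudo cp out/libnvinfer_plugin.so.x.y.z /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```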

Refer to the topic below, where I rebuilt the TensorRT OSS plugin and replaced it on an NX.

For more reference, see YOLOv4 - NVIDIA Docs.
