RetinaNet trained with TLT not deployable with DeepStream 5.0 on a Jetson Nano

Hi,
I have used the Transfer Learning Toolkit (docker image: nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3) to train a RetinaNet model and exported it to an .etlt file so I can deploy it with DeepStream 5.0.
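
For reference, the export step looked roughly like this (paths, spec file, and the key are placeholders; flags follow the TLT 2.0 docs):

tlt-export retinanet \
    -m /workspace/experiments/retinanet/weights/model.tlt \
    -o /workspace/experiments/retinanet/export/model.etlt \
    -e /workspace/experiments/retinanet/spec.txt \
    -k $NGC_KEY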

The output of dpkg -l | grep -i tensor inside this TLT container is given below:


root@2a17691a9683:/workspace# dpkg -l | grep -i tensor
ii libnvinfer-dev 7.0.0-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 7.0.0-1+cuda10.0 amd64 TensorRT plugin libraries
ii libnvinfer-plugin7 7.0.0-1+cuda10.0 amd64 TensorRT plugin libraries
ii libnvinfer7 7.0.0-1+cuda10.0 amd64 TensorRT runtime libraries
ii libnvonnxparsers-dev 7.0.0-1+cuda10.0 amd64 TensorRT ONNX libraries
ii libnvonnxparsers7 7.0.0-1+cuda10.0 amd64 TensorRT ONNX libraries
ii libnvparsers-dev 7.0.0-1+cuda10.0 amd64 TensorRT parsers libraries
ii libnvparsers7 7.0.0-1+cuda10.0 amd64 TensorRT parsers libraries


Now I am trying to use this model on a Jetson Nano running JetPack 4.4 with DeepStream 5.0.


nvidia@nvidia-desktop:~/DS-CURBSENSOR$ jetson_release

  • NVIDIA Jetson Nano (Developer Kit Version)
    • Jetpack 4.4 [L4T 32.4.3]
    • NV Power Mode: MAXN - Type: 0
    • jetson_clocks service: inactive
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: 0.3.7
    • Vulkan: 1.2.70

However, the TensorRT version used to export the model (7.0.0 in the TLT container) differs from the version that has to build the engine from the .etlt file on the Nano (7.1.3).
When I try to run a sample app with this .etlt model, it gives me this error:


WARNING: INT8 not supported by platform. Trying FP16 mode.
ERROR: [TRT]: UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
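
For completeness, the relevant part of my nvinfer config looks roughly like this (file names, the key, and the custom parser entries are placeholders, following the deepstream_tao_apps samples):

[property]
tlt-encoded-model=retinanet.etlt
tlt-model-key=<key used at export>
# 1 = INT8; falls back to FP16 on the Nano, as the warning above shows
network-mode=1
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<path to libnvds_infercustomparser_tlt.so>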


How am I supposed to use this model on a Nano?

DeepStream 5.0 runs on JetPack 4.4 (L4T 32.4.3), which ships TensorRT 7.1.3, while the TLT image produces models that, as far as I can tell, can only be parsed with TensorRT 7.0.0.

Please give me a solution for this issue.


Please provide complete information as applicable to your setup.
Hardware Platform : Jetson
DeepStream Version : 5.0
JetPack Version : 4.4
TensorRT Version : 7.0.0 (TLT container) / 7.1.3 (Jetson)

Moving this topic into the TLT forum.

Please follow the deepstream_tao_apps repository (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps, sample apps demonstrating how to deploy models trained with TAO/TLT on DeepStream), in particular the instructions at https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/TRT-OSS/Jetson, to build the TensorRT OSS plugin library for Jetson. The OSS build provides the BatchTilePlugin_TRT operation that the UFF parser reports as unsupported in the error above.
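
On a Nano, the steps from that README look roughly like this (a sketch, not a verified transcript: the release/7.1 branch is assumed to match TensorRT 7.1.3 on JetPack 4.4, and GPU_ARCHS=53 is the Nano's SM version; check the README for the exact branch and CMake version requirements):

# Build the TensorRT OSS plugins matching TensorRT 7.1.3 (JetPack 4.4)
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
# Note: requires CMake >= 3.13; JetPack's default cmake may be too old (see the README)
cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin library, then replace it with the OSS build
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ${HOME}/libnvinfer_plugin.so.7.1.3.bak
sudo cp `pwd`/out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig

After replacing libnvinfer_plugin.so, DeepStream should be able to parse the .etlt model, since the BatchTilePlugin is then available at engine build time.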