TRT conversion problem

Hello,

I have a Jetson Xavier with my own C++ program using TensorRT 5.1.5.0.
For various reasons, I would like to keep this TensorRT version.

I trained a new model with the latest TAO Toolkit and I would like to use it in my program.
On the TAO main page, only a few tao-converter versions are provided. The oldest one is for JetPack 4.4 with TensorRT 7.1.3.

  • Does a tao-converter built against TensorRT 5.1.5.0 exist? If yes, is it compatible with the latest TAO Toolkit?
  • If I use an old tlt-converter from the TensorRT 5.1.5.0 era, is that the correct process? And where could I find it?

Thanks

Please refer to the oldest version of the TLT/TAO user guide:
https://docs.nvidia.com/tao/tao-toolkit-archive/tlt-10/tlt-getting-started-guide/index.html#deepstream_deployment

For the Jetson platform, the tlt-converter for JetPack 4.2.2 and JetPack 4.2.3 / 4.3 is available to download in the dev zone.

It depends on the network. You can give it a try.

There are two ways to generate a TensorRT engine.
One is to use tlt-converter.
The other is to let deepstream-app generate the TensorRT engine.
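For the first option, a typical tlt-converter invocation looks roughly like the sketch below. The key, input dimensions, output node name, and file names are placeholders for illustration only, not values taken from this thread; check the export log of your own TAO model for the real ones.

```shell
# Hypothetical tlt-converter invocation. All values below are placeholders:
# adjust the key, dimensions, output node(s), and paths to your own export.
KEY="nvidia_tlt"                      # encryption key used when exporting the .etlt
MODEL="frcnn_resnet18.etlt"           # exported TAO model (placeholder name)
ENGINE="frcnn_resnet18_fp16.engine"   # TensorRT engine to write

CMD="tlt-converter -k ${KEY} \
  -d 3,544,960 \
  -o NMS \
  -t fp16 \
  -e ${ENGINE} \
  ${MODEL}"
# -o NMS is a placeholder output node; Faster R-CNN uses network-specific
# output names, which the TAO export step prints.

# Print the command instead of executing it, since tlt-converter itself
# is only available on the target device.
echo "${CMD}"
```

The engine must be generated on the device (or in a container matching the device's TensorRT version), because TensorRT engines are not portable across versions or GPUs.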

Thank you for your help.

In order to convert my model trained with the latest TAO, I tried:
1 - tlt-converter 5.1 from your link: I can generate the engine without errors, but I get bad results (many bounding boxes all over the image).
2 - tao-converter for TensorRT 7.2, linked against TensorRT 5.1: undefined symbol, so I can't run the converter.

=> Do you have any ideas?
Is the only solution to train with an old Transfer Learning Toolkit version, and therefore with the "old models"?

I have a second question. If I want to use TensorRT 7.2 on my Jetson, I need to update it with JetPack in order to have the correct driver version, is that right?
Or is it possible to get the TensorRT 7.2 binaries for Jetson, compile my C++ program with them, and run it on the Jetson with the old JetPack 4.2 (old driver, etc.)?

Thanks,

Which network did you train? How are the inference results with “tao xxx inference xxx”?

Usually yes, if you update from TRT 5 to TRT 7, because the BSP version is different. It is suggested to flash the board via SDK Manager.

I trained a Faster R-CNN with a ResNet-18 backbone.

Inference results are good using the inference section of the TAO notebook (using the .etlt file).
Inference with TensorRT is also fine on my desktop computer with TensorRT 7.2, using the tao-converter from the TAO main page.

Therefore, my problem is running inference with an old TensorRT version on the Jetson, from an .etlt generated with the new TAO.

Is the only solution to train with an old Transfer Learning Toolkit version, and therefore with the "old models"?

Some tips; please try one of the following.

  1. Update the board with JetPack/SDK Manager.
  2. Pull and run an old version of the l4t-base docker image, NVIDIA L4T Base | NVIDIA NGC.
    Then, inside the docker, use tlt-converter to generate the TensorRT engine.
  3. Pull and run an old version of the deepstream-l4t docker image, DeepStream-l4t | NVIDIA NGC.
    Then, inside the docker, use tlt-converter to generate the TensorRT engine, or use deepstream-app to run GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tlt2.0.
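As a rough sketch of the container route, the flow might look like the commands below. The image tag and mount path are illustrative assumptions, not verified against NGC; pick the l4t-base tag that matches your board's JetPack/L4T release.

```shell
# Illustrative only: the tag and paths are placeholders, not verified
# against NGC. Match the tag to the L4T release installed on the board.
L4T_TAG="r32.2.1"   # hypothetical tag for an older JetPack 4.2.x release

PULL="docker pull nvcr.io/nvidia/l4t-base:${L4T_TAG}"
RUN="docker run -it --rm --runtime nvidia \
  -v /home/user/models:/models \
  nvcr.io/nvidia/l4t-base:${L4T_TAG}"

# Inside the running container you would then invoke tlt-converter
# on the mounted .etlt file to produce the engine.
echo "${PULL}"
echo "${RUN}"
```

The point of running the older container on the device is that the engine gets built against the same TensorRT/BSP stack the application will use at runtime.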
