TF2 OD on Jetson Nano 4GB (TF → ONNX → TRT) seemingly not supported due to TensorRT version

I would like to fine-tune and deploy an SSD MobileNet V2 (with FPNLite, 640x640) on a Jetson Nano (4GB). I use the official SD card image with JetPack 4.6 (TensorRT 8.0.1.6).

After days and days of trying to find any kind of information on support for TF2 Object Detection Model Library models, I found this GitHub repo: https://github.com/pskiran1/TensorRT-support-for-Tensorflow-2-Object-Detection-Models
(which was also recommended on the NVIDIA Developer Forum)

I would like to emphasize that this repo is the only source that documents that float inputs are needed during export to successfully go through the saved_model → ONNX → TRT conversion and build chain. I could create the ONNX model, but building it always fails because the EfficientNMS plugin is not supported by TensorRT 8.0.1.
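
For reference, here is a minimal sketch of the chain I am running (all paths are placeholders; the linked repo additionally post-processes the ONNX graph to swap TF's NMS for the EfficientNMS_TRT plugin before the build step, which is omitted here):

```python
# Sketch of the saved_model -> ONNX -> TRT chain (paths are placeholders).
import subprocess

# 1. Export the fine-tuned checkpoint with a float input signature
#    (exporter_main_v2.py from the TF2 OD API supports float_image_tensor).
subprocess.run([
    "python", "exporter_main_v2.py",
    "--input_type", "float_image_tensor",
    "--pipeline_config_path", "pipeline.config",
    "--trained_checkpoint_dir", "checkpoint/",
    "--output_directory", "exported/",
], check=True)

# 2. Convert the SavedModel to ONNX with tf2onnx.
subprocess.run([
    "python", "-m", "tf2onnx.convert",
    "--saved-model", "exported/saved_model",
    "--output", "model.onnx",
    "--opset", "11",
], check=True)

# 3. Build the TensorRT engine on the Nano. This is the step that fails
#    on TensorRT 8.0.1 once the graph contains the EfficientNMS_TRT op.
subprocess.run([
    "/usr/src/tensorrt/bin/trtexec",
    "--onnx=model.onnx",
    "--saveEngine=model.trt",
    "--fp16",
], check=True)
```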

(See also the related issues on GitHub.)

My first question: is this really my problem, or should TF2 OD models be supported/compatible (and, more importantly, buildable) on Jetson Nano devices with the newest JetPack 4.6? If they should be compatible (and I am wrong), could you please help me build the given model on my Nano? (As a demonstration, a pretrained model from the TF2 OD model zoo would be perfect.)

In case I am right, and my problem is indeed the lack of TensorRT 8.2+ on my Jetson Nano: I have seen that the new JetPack 4.6.1 (with TensorRT 8.2+) is planned for release soon. Could you please provide any hints on the release date beyond your roadmap (Jetson Roadmap | NVIDIA Developer)?

In case the release is still a long way off, could you provide any fallback mechanism, as was provided for EfficientDet in the TensorRT samples?

Alternatively, can I install a newer TensorRT on its own, without replacing my JetPack version? Or is waiting for the release recommended?

Thank you in advance for your help and work!


Hi,

Please check the example below:

It will show you how to get EfficientNMS working with TensorRT.
(This requires TensorRT v8.0, which is available in JetPack 4.6.)
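
As a quick sanity check (a minimal sketch, assuming the standard TensorRT Python bindings), you can verify whether your installed TensorRT registers the EfficientNMS_TRT plugin creator before attempting a build:

```python
# Check whether the installed TensorRT registers the EfficientNMS_TRT plugin.
# If get_plugin_creator() returns None, building an engine from a graph that
# uses this plugin will fail on that TensorRT version.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, namespace="")

registry = trt.get_plugin_registry()
creator = registry.get_plugin_creator("EfficientNMS_TRT", "1")
print("TensorRT version:", trt.__version__)
print("EfficientNMS_TRT available:", creator is not None)
```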

Thanks.

Thank you for your answer. Unfortunately, I cannot mark this as the solution, as I did not get answers to some of my questions.

Thanks for the EfficientDet sample; as you can see in my original post, I had already found this source. The built engine (of the d0 version) has a mean latency of about 280 ms on the Nano, which is not fast enough for us. We specifically switched from a Raspberry Pi 4 + Coral Edge TPU because we wanted more flexibility and faster inference on higher-input-resolution models. Using fallback solutions along the lines of the EfficientDet sample you provided will significantly increase inference time, as you also state in that repo's README, so it is not equivalent to the full TF2 OD model support that should come with TensorRT 8.2+.
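
For context, the 280 ms figure is a mean over repeated on-device inferences; a minimal sketch of how I measure it (engine path and iteration counts are placeholders, assuming static input shapes and the TensorRT Python API plus pycuda):

```python
# Rough mean-latency measurement for a built engine (placeholder paths;
# assumes static binding shapes, as in a fixed-batch ONNX export).
import time
import numpy as np
import pycuda.autoinit   # noqa: F401 -- creates the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, namespace="")

runtime = trt.Runtime(logger)
with open("model.trt", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a host/device buffer pair for every binding and upload zeros
# (dummy input is fine for a pure latency measurement).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(engine.get_binding_shape(i)), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    cuda.memcpy_htod(dev, host)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Warm up, then time 100 synchronous inferences.
for _ in range(10):
    context.execute_v2(bindings)
start = time.perf_counter()
for _ in range(100):
    context.execute_v2(bindings)
mean_ms = (time.perf_counter() - start) / 100 * 1e3
print(f"mean latency: {mean_ms:.1f} ms")
```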

Therefore, it is important for us to have EfficientNMS support (as in TensorRT 8.2+) on Jetson Nano devices, all the more so because we would like to use another architecture from the TF2 OD model zoo, not the EfficientDet model.

My main question was: when will this be possible? In other words, when is the release of JetPack 4.6.1 currently planned? Your roadmap says Q4 2021. If the release is delayed, is there any way to update TensorRT on the Nano other than through JetPack, or is it recommended to wait for the release?


Hi,

Sorry for the inconvenience.

The schedule for JetPack 4.6.1 is slightly delayed.
However, we are not able to disclose any details here.

Please wait for our announcement of the release.
Thanks, and sorry again for the inconvenience.
