The CLRNet ONNX file converts to TensorRT normally on Ubuntu 20.04 with an RTX 3090, but an error occurs when converting it with trtexec on Jetson. Is this a version issue?
Environment
Jetson: Orin, L4T r35.1 (Linux)
RTX 3090: x86-64, Ubuntu 20.04, Docker
TensorRT Version: Jetson: 8.5.0, RTX 3090: 8.5.1
Nvidia Driver Version: RTX 3090: 470.103.01
CUDA Version: Jetson: cuda-11.8, RTX 3090: cuda-11.3
CUDNN Version: Jetson: 8.5.0, RTX 3090: 8.6.0
Operating System + Version: Ubuntu 20.04.5
Python Version (if applicable): 3.8
PyTorch Version (if applicable): Jetson: torch 1.12.0+cu114 (built from source), RTX 3090: 1.12.0+cu113 (installed from torch wheel)
Baremetal or Container (if container which image + tag):
Jetson: nvcr.io/nvidia/l4t-jetpack:r35.1.0
RTX 3090: nvidia/cuda:11.4.2-cudnn8-runtime-ubuntu20.04
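For reference, the conversion command is roughly of the following form (the file names and options here are placeholders, not the exact command used):

```
# Convert the CLRNet ONNX model to a TensorRT engine with trtexec
# (succeeds on the RTX 3090 container, fails on Jetson Orin)
trtexec --onnx=clrnet.onnx \
        --saveEngine=clrnet.engine \
        --fp16 \
        --verbose
```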
As noted in the test environment in the first post, the conversion runs without errors in the NGC container on x86. The error only occurs on Jetson Orin.
But on Jetson Orin the latest available TensorRT version is 8.5.0.
Which L4T image in the NGC catalog provides 8.5.1?
We don’t have a GA TensorRT 8.5 package for Orin right now.
If you just want to verify the model with TensorRT, below is a DP JetPack release that can be used.
(Native setup only; the container for TensorRT 8.5 is not available either.)
For production, please wait for our next JetPack 5.1 release to get TensorRT 8.5.
I already tested with the 22.08 Jetson CUDA-X AI Developer Preview.
The preview had the same problem because its TensorRT version was also 8.5.0.
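(A quick way to confirm which TensorRT version a given container or JetPack image ships with; these are generic commands, not taken from the posts above:)

```
# List the installed TensorRT/nvinfer packages and their versions
dpkg -l | grep nvinfer

# Or check the version reported by the Python bindings
python3 -c "import tensorrt; print(tensorrt.__version__)"
```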
Will the next JetPack release include TensorRT 8.5.1?
Or how can I get TensorRT 8.5.1 for Jetson Orin?