Install TensorRT 8.6.1.6 on Jetpack 6.0 DP

Hi all

My project requires TensorRT 8.6.1.6, but JetPack 6.0 DP ships with version 8.6.2.3. Does JetPack 6.0 DP support TensorRT 8.6.1.6?

If yes, how can I install it?

Thanks.

Hi,

TensorRT 8.6.1.6 is a desktop release and is not available for Jetson.

But v8.6.1.6 and v8.6.2.3 are very close.
Do you have any compatibility issues?

Thanks.
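For reference, on the device itself `python3 -c "import tensorrt; print(tensorrt.__version__)"` reports the installed TensorRT version. A minimal sketch for comparing that against the version an engine plan expects (the version strings below are just the ones from this thread; adjust for your setup):

```python
# Compare two TensorRT version strings numerically, e.g. the library on
# the device vs. the version a serialized engine plan was built with.

def version_tuple(version: str) -> tuple:
    """Turn a dotted version like '8.6.2.3' into (8, 6, 2, 3)."""
    return tuple(int(part) for part in version.split("."))

device_version = "8.6.2.3"   # what JetPack 6.0 DP ships
engine_version = "8.6.1.6"   # what the plan file was built with

# Engine plan files are only loadable by the exact library version that
# built them, so any mismatch means the engine must be rebuilt on-device.
if version_tuple(device_version) != version_tuple(engine_version):
    print(f"Mismatch: rebuild the engine with TensorRT {device_version}")
```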

@AastaLLL
Hi AastaLLL

On Jetson, I converted the model and got this error message: Device memory is insufficient to use tactic.

So I converted the model with TensorRT 8.6.1.6 and CUDA 11.7 on Windows instead,

then imported the engine on Jetson, which printed this error message:

[TRT] [E] 6: The engine plan file is not compatible with this version of TensorRT, expecting library version 8.6.2.3 got 8.6.1.6, please rebuild.

All of my projects use TensorRT 8.6.1.6 with CUDA 11.7 on Windows,
and now I want to deploy them on Jetson.
If the two are not compatible, that will be quite a problem for me.

Thanks.

Hi,

An engine built on a desktop GPU cannot be used on Jetson.
A TensorRT engine depends on both the software version and the hardware architecture it was built on.

Could you share the complete log from the Jetson with us?
Thanks.

@AastaLLL

Hi AastaLLL:

My training data image size is 2448×2048.

Converting the model returned this error:

Traceback (most recent call last):
  File "/home/orinnx/AI_Python/TranModelToTensorRT.py", line 240, in <module>
    ai_model_obj.Convert_RT(rt_model_path=rt_model_path, max_trt_batch_size=predict_batch_size, device=device)
  File "/home/orinnx/AI_Python/SegSMP.py", line 1671, in Convert_RT
    self.model_trt = torch2trt(self.model, [im], max_batch_size=max_trt_batch_size,
  File "/home/orinnx/AI_Python/torch2trt/torch2trt.py", line 695, in torch2trt
    outputs = module(*inputs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/segmentation_models_pytorch/base/model.py", line 30, in forward
    decoder_output = self.decoder(*features)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/segmentation_models_pytorch/decoders/unet/decoder.py", line 119, in forward
    x = decoder_block(x, skip)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/segmentation_models_pytorch/decoders/unet/decoder.py", line 40, in forward
    x = self.conv1(x)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/container.py", line 215, in forward
    input = module(input)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/orinnx/.local/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: NVML_SUCCESS == r INTERNAL ASSERT FAILED at "/tmp/pytorch/c10/cuda/CUDACachingAllocator.cpp":1154, please report a bug to PyTorch.

Another training data image size is 1056×1056.

Converting that model returned this message:
[TRT] [W] Tactic Device request: 823MB Available: 662MB. Device memory is insufficient to use tactic.

Thanks.

Hi,

[TRT] [W] Tactic Device request: 823MB Available: 662MB. Device memory is insufficient to use tactic.

This is a warning message rather than an error.
Could you share the complete TensorRT log with us?

Thanks.
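For reference, that warning means the builder skipped one optimization tactic because it would need more scratch memory than the Jetson had free; capping the builder workspace makes this bounded and predictable. A minimal sketch using torch2trt's `max_workspace_size` keyword (the model, input shape, and budget below are placeholders, not from this thread):

```python
def workspace_bytes(mib: int) -> int:
    """Convert a MiB budget into the byte count torch2trt expects."""
    return mib << 20

def build_engine(model, x, workspace_mib: int = 256):
    # max_workspace_size caps the TensorRT builder's scratch memory;
    # tactics requesting more than this are skipped with a warning
    # instead of exhausting device memory on a memory-constrained Jetson.
    from torch2trt import torch2trt  # imported here: needs a CUDA device
    return torch2trt(model, [x],
                     max_workspace_size=workspace_bytes(workspace_mib))
```

Lowering the budget trades some potential engine performance for a build that fits in the remaining device memory.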

Hi AastaLLL

I converted the model with torch2trt,

downloaded from GitHub - NVIDIA-AI-IOT/torch2trt: An easy-to-use PyTorch to TensorRT converter.

I cannot find the TensorRT log.

Where is the log file written?

Thanks.

Hi,

Sorry for the late update.

TensorRT logs directly to the console.
Please copy the output and share it with us.

Thanks.
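Since the builder messages go to stdout/stderr rather than a file, one simple way to capture them is to pipe the conversion script (script name here matches the traceback above; substitute your own) through `tee`:

```shell
# Capture everything the conversion prints, TensorRT builder messages
# included, into a log file while still showing it on screen.
python3 TranModelToTensorRT.py 2>&1 | tee trt_build.log
```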

Hi, I am having the same issue. I build my TensorRT engine using nvcr.io/nvidia/tensorflow:24.01-tf2-py3
on the Jetson. This image uses TensorRT 8.6.1.6. JetPack 6.0 DP uses 8.6.2.3 and throws the error "The engine plan file is not compatible with this version of TensorRT, expecting library version 8.6.2.3 got 8.6.1.6, please rebuild".

Unfortunately I cannot find an image with TensorFlow-TensorRT support that uses TensorRT 8.6.2.3.

Hi,

A TensorRT engine depends on both software and hardware.
Even if you find a matching TensorRT version, you will still need to rebuild the engine for the different GPU architecture.

So please directly convert the model into a TensorRT engine on Jetson.
You can find a TensorFlow package for Jetson below:

Thanks.
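For the TF-TRT route, conversion can be done on-device with TensorFlow's public `TrtGraphConverterV2` API; a sketch (the SavedModel paths are hypothetical, and this must run on the Jetson itself so the engines are built against JetPack's TensorRT):

```python
def convert_on_jetson(saved_model_dir: str, output_dir: str) -> None:
    # TF-TRT builds its TensorRT engines with the TensorRT library
    # installed on this machine, so running the conversion on the Jetson
    # yields engines matching JetPack's version (8.6.2.3 on JetPack 6.0 DP).
    from tensorflow.python.compiler.tensorrt import trt_convert as trt
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=saved_model_dir)
    converter.convert()
    converter.save(output_saved_model_dir=output_dir)
```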

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.