Installing Torch-TensorRT on Jetson Nano

Hello. I have a Jetson Nano and I’m trying to compile Torch-TensorRT for it.
I was reading this post on this forum.
I’m trying to compile it using the instructions here.

I have checked out the v2.2.0 tag, which is the latest stable release, but I’m getting an error when compiling:

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
In file included from ./core/util/prelude.h:10,
                 from ./core/conversion/conversionctx/ConversionCtx.h:13,
                 from ./core/conversion/converters/converter_util.h:7,
                 from core/conversion/converters/impl/cast.cpp:2:
./core/util/trt_util.h: In function ‘std::ostream& nvinfer1::operator<<(std::ostream&, const nvinfer1::TensorFormat&)’:
./core/util/trt_util.h:39:34: error: ‘kHWC’ is not a member of ‘nvinfer1::TensorFormat’; did you mean ‘kHWC8’?
   39 |     case nvinfer1::TensorFormat::kHWC:
      |                                  ^~~~
      |                                  kHWC8
Target //:libtorchtrt failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 6744.866s, Critical Path: 6698.91s
INFO: 6 processes: 6 internal.
FAILED: Build did NOT complete successfully
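
For reference, these are roughly the steps I ran (the clone path is just where I keep sources; the bazel target is the one named in the log above):

git clone https://github.com/pytorch/TensorRT.git
cd TensorRT
git checkout v2.2.0          # latest stable tag at the time
bazel build //:libtorchtrt   # target name as it appears in the error log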

I’ve tried Googling around but didn’t find anything useful. The code has no changes from the release, so I would expect it to build without errors. Looking at the docs, it does look like this kHWC is defined.

This is my setup:
Package: nvidia-jetpack
Version: 4.5.1-b17

Linux: 4.9.201-tegra
CUDA version: 10.2

Any ideas on what could be wrong here?

Hi,

Since JetPack 4 only supports CUDA 10.2, please check out a compatible branch first.

For example: v1.1.0 (see the checkout sketch after the dependency list below).

Dependencies

  • Bazel 4.2.1
  • Libtorch 1.11.0 (built with CUDA 11.3)
  • CUDA 11.3 (10.2 on Jetson)
  • cuDNN 8.2.1
  • TensorRT 8.2.4.2 (TensorRT 8.2.1 on Jetson)
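
A minimal sketch of switching to that release, assuming a standard pytorch/TensorRT source checkout:

cd TensorRT                  # your Torch-TensorRT source tree
git fetch --tags
git checkout v1.1.0          # release built against TensorRT 8.2.x
bazel clean                  # drop artifacts compiled from the v2.2.0 sources
bazel build //:libtorchtrt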

Thanks.

Thanks! Just to make sure I understand: the problem is that Torch-TensorRT v2.2.0 needs CUDA 12.1 to work, and my JetPack 4.5.1 only has CUDA 10.2?
Is JetPack 4.6.4 the latest one available to Jetson Nano?

I posted the reply above before trying to compile again; I’ve since rebuilt from v1.1.0 but still got the same error:

In file included from ./core/util/prelude.h:10,
                 from core/partitioning/shape_analysis.cpp:3:
./core/util/trt_util.h: In function ‘std::ostream& nvinfer1::operator<<(std::ostream&, const nvinfer1::TensorFormat&)’:
./core/util/trt_util.h:39:34: error: ‘kHWC’ is not a member of ‘nvinfer1::TensorFormat’; did you mean ‘kHWC8’?
   39 |     case nvinfer1::TensorFormat::kHWC:
      |                                  ^~~~
      |                                  kHWC8
Target //:libtorchtrt failed to build

I’ve checked out commit 3cf58a209ad0f4e7d508e158c2b9b69d36b7e95d, which has the tag v1.1.0.
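
A quick way to confirm the checkout is on the intended release (the tag and commit should agree):

git log -1 --oneline   # should start with 3cf58a2
git describe --tags    # should print v1.1.0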

Is that “NvInfer.h” header part of CUDA?

Thanks!

Hi,

Yes, the latest software for Nano is JetPack 4.6.4.
JetPack 4.6.4 contains TensorRT 8.2.1.
JetPack 4.5.1 contains TensorRT 7.1.3.

nvinfer1::TensorFormat::kHWC does exist in TensorRT 8.2.1:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-821/api/c_api/namespacenvinfer1.html#ac3e115b1a2b1e578e8221ef99d27cd45

But it is not present in TensorRT 7.1.3:

https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-713/api/c_api/namespacenvinfer1.html#ad26d48b3a534843e9990ab7f903d34a7

So please upgrade your device to JetPack 4.6.4 and try it again. (NvInfer.h is a TensorRT header, not part of CUDA, so upgrading JetPack also updates it.)
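
One way to confirm which JetPack and TensorRT are actually installed on the device (standard dpkg queries on Jetson; the header path follows the usual JetPack packaging):

dpkg-query --show nvidia-jetpack   # JetPack meta-package version
dpkg -l | grep -i tensorrt         # installed TensorRT packages
grep NV_TENSORRT /usr/include/aarch64-linux-gnu/NvInferVersion.h   # version macros live in TensorRT’s headers, not CUDA’s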

Thanks.

Did that! And it looks like I’m getting somewhere, but I’m running into yet another error:

Starting local Bazel server and connecting to it…
INFO: Analyzed target //:libtorchtrt (65 packages loaded, 8815 targets configured).
INFO: Found 1 target…
INFO: Deleting stale sandbox base /home/jose/.cache/bazel/_bazel_jose/65530bb867eebaee96246bec76a05199/sandbox
ERROR: /home/jose/PythonJetson/TensorRT/cpp/lib/BUILD:13:10: Linking cpp/lib/libtorchtrt_runtime.so failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc @bazel-out/aarch64-opt/bin/cpp/lib/libtorchtrt_runtime.so-2.params

Use --sandbox_debug to see verbose messages from the sandbox
/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch.so while searching for torch
/usr/bin/ld.gold: error: cannot find -ltorch
/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch_cuda.so while searching for torch_cuda
/usr/bin/ld.gold: error: cannot find -ltorch_cuda
/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch_cpu.so while searching for torch_cpu
/usr/bin/ld.gold: error: cannot find -ltorch_cpu
/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch_global_deps.so while searching for torch_global_deps
/usr/bin/ld.gold: error: cannot find -ltorch_global_deps
/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Cc10_Ucuda___Ulib/libc10_cuda.so while searching for c10_cuda
/usr/bin/ld.gold: error: cannot find -lc10_cuda
/usr/bin/ld.gold: warning: skipping incompatible bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Cc10___Ulib/libc10.so while searching for c10
/usr/bin/ld.gold: error: cannot find -lc10

It looks like the linker is skipping the PyTorch and CUDA libraries as “incompatible” and then failing to find usable copies of them.
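
The “skipping incompatible” warnings usually mean the libraries are built for the wrong architecture, e.g. an x86_64 libtorch on this aarch64 board. One way to check, using the path from the log above:

file -L bazel-out/aarch64-opt/bin/_solib_aarch64/_U@libtorch_S_S_Ctorch___Ulib/libtorch.so
# an aarch64 build should report: ELF 64-bit LSB shared object, ARM aarch64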

Any ideas?
