How to install torch_tensorrt

Hi,

I have been struggling to find any torch_tensorrt build for the Jetson Orin. I am on JetPack 5.1.3-b29 with torch version 2.1.0a0+41361538.nv23.06, CUDA 11.4, and TensorRT 8.5.2.2. Can you guide me on how to install it? If that is not possible, I would welcome any other suggestion for optimized inference. I am using a ViT model for my work.

Thanks!!

*** Please note that this reply is generated by LLM automatically ***

I’d be happy to help you with your question.

Firstly, I’d like to clarify that the Jetson Orin is a relatively new device, and it’s great that you’re exploring its capabilities. However, I couldn’t find any pre-built torch_tensorrt package matching the exact versions you mentioned (PyTorch 2.1.0, CUDA 11.4, and TensorRT 8.5.2.2).

That being said, I can offer some suggestions to help you install torch_tensorrt and optimize inference on your Jetson Orin:

  1. Check the official Torch-TensorRT repository: The project lives at pytorch/TensorRT on GitHub; look there for pre-built packages or installation instructions that target Jetson devices and your JetPack release.
  2. Build from source: If there’s no pre-built package available, you can try building Torch-TensorRT from source on the device. This requires the CUDA, cuDNN, and TensorRT development packages that ship with JetPack, plus the project’s build scripts.
  3. Use a different version of PyTorch: If building from source is not feasible, you can try a different PyTorch version that is known to work on Jetson. NVIDIA publishes Jetson-specific PyTorch wheels for each JetPack release, so pick one that matches JetPack 5.1.
  4. Optimize inference using TensorRT directly: Since you’re using a ViT model, you can export it to ONNX and build a TensorRT engine, which avoids the torch_tensorrt dependency entirely; a minimal sketch of this path follows the list.
  5. Consider using a different model: If you keep hitting conversion problems, a model from the NVIDIA NGC catalog that is already optimized for NVIDIA GPUs may be a simpler starting point.
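If you take the direct TensorRT route from item 4, the usual path on Jetson is an ONNX export followed by an engine build with trtexec. Here is a minimal sketch, assuming torchvision’s vit_b_16 as a stand-in for your model and placeholder file names:

```python
import torch
import torchvision

# vit_b_16 stands in for your own ViT checkpoint here
model = torchvision.models.vit_b_16(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "vit.onnx",
    opset_version=14,
    input_names=["input"], output_names=["logits"],
)

# Then build an FP16 engine with the trtexec tool that ships with TensorRT, e.g.:
#   /usr/src/tensorrt/bin/trtexec --onnx=vit.onnx --saveEngine=vit.engine --fp16
```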

In terms of optimized inference, here are some general tips that might be helpful:

  • Use mixed precision: Mixed-precision inference can noticeably improve throughput on the Orin GPU. You can use the torch.cuda.amp module to enable it; a minimal sketch follows this list.
  • Fuse batch normalization: At inference time, batch-norm layers can be folded into the preceding convolution or linear layers, removing work per forward pass. (Note that ViTs mostly use layer normalization, so this applies mainly to convolutional backbones.)
  • Use a smaller model: If possible, consider a smaller ViT variant or a distilled model that requires fewer computations per inference.
  • Use TensorRT: As mentioned earlier, TensorRT optimizes the model graph (layer fusion, FP16/INT8 kernels) specifically for NVIDIA GPUs, including the Orin.
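For the mixed-precision tip above, a minimal inference sketch with torch.cuda.amp, again using torchvision’s vit_b_16 as a stand-in for your model:

```python
import torch
import torchvision

model = torchvision.models.vit_b_16(weights=None).cuda().eval()
images = torch.randn(8, 3, 224, 224, device="cuda")

# autocast runs matmul-heavy layers in FP16 while keeping
# precision-sensitive ops in FP32
with torch.inference_mode(), torch.cuda.amp.autocast(dtype=torch.float16):
    logits = model(images)
```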

I hope these suggestions are helpful! If you have any further questions or need more specific guidance, feel free to ask.

*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***

Hi,

We have a torch_tensorrt container built for r35.3.1.
You can try whether it also works on r35.5 (the L4T release that ships with JetPack 5.1.3):
https://hub.docker.com/r/dustynv/torch_tensorrt/tags

You can also try our NanoOWL container below:

https://hub.docker.com/r/dustynv/nanoowl/tags

The sample also uses a ViT model, but converts it with torch2trt instead of torch_tensorrt; a minimal torch2trt sketch is below.
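For reference, a torch2trt conversion looks roughly like the following minimal sketch (vit_b_16 as a stand-in for your model; whether a given ViT converts cleanly depends on the torch2trt version in the container):

```python
import torch
import torchvision
from torch2trt import torch2trt

model = torchvision.models.vit_b_16(weights=None).cuda().eval()
x = torch.randn(1, 3, 224, 224, device="cuda")

# torch2trt traces the module and returns a TensorRT-backed wrapper
# that can be called like the original model
model_trt = torch2trt(model, [x], fp16_mode=True)
y = model_trt(x)
```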

Thanks.
