Conducting TensorRT Inference on Jetson Nano with CUDA Space Constraints

I have converted my .h5 model to a TensorRT engine, but my Jetson Nano has limited storage, which prevents me from installing the full CUDA library. Since TensorRT relies on CUDA for acceleration, I'm not sure whether running inference on the Nano is feasible at all.
Could someone advise me on solutions or workarounds for running TensorRT inference on the Jetson Nano under these space constraints? Any insights, recommendations, or alternative approaches would be greatly appreciated.
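For reference, once TensorRT and its CUDA runtime dependencies are present, the inference path typically looks like the minimal sketch below. The engine filename model.trt, the use of the pycuda package, and the binding indices (one input, one output) are assumptions for illustration and are not from the original post.

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on the default device
import pycuda.driver as cuda
import tensorrt as trt

# Deserialize the engine built from the converted .h5 model
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers
# (binding 0 = input, binding 1 = output, assuming a single-input/single-output network)
h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

# Copy the input to the GPU, run inference, and copy the result back
cuda.memcpy_htod_async(d_input, h_input, stream)
context.execute_async_v2(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_output, d_output, stream)
stream.synchronize()
print(h_output)
```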
Thanks!

Hello,

Welcome to the NVIDIA Developer forums! Your topic will be best served in the Jetson category.

I will move this post over for visibility.

Cheers,
Tom

Hi,

The most space-efficient approach is to reflash the system and install TensorRT with:

$ sudo apt-get install nvidia-tensorrt

This installs only the TensorRT package and its dependencies.
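As a quick sanity check after that install, the TensorRT Python bindings can be imported directly. This is a minimal sketch assuming the python3-libnvinfer bindings were pulled in alongside the package:

```python
# Confirm that the TensorRT Python bindings load after the apt install above;
# assumes python3-libnvinfer was installed as one of the dependencies.
import tensorrt as trt

print(trt.__version__)  # prints the TensorRT version shipped with JetPack
```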

Thanks.
