When I try:
torch.cuda.is_available()
I get False. Does this mean that I am not using CUDA at all? I am fairly sure that this returned True before. I have installed torch, torchvision, and PyCUDA.
Is it possible to use TRT models without CUDA?
In jtop, I have the following:
print(torch.__version__) results in 2.0.1
print(torchvision.__version__) results in 0.15.2.
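Putting the checks together, this is roughly what I am running (a minimal sketch of the checks above; the expected outputs are in the comments):

import torch
import torchvision

print(torch.__version__)          # 2.0.1
print(torchvision.__version__)    # 0.15.2
print(torch.cuda.is_available())  # False
print(torch.cuda.device_count())  # 0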
At the end of bashrc file, I have added:
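(For illustration only, since I did not copy the exact lines here: the additions are the usual CUDA path exports, assuming the standard JetPack install location /usr/local/cuda. Your exact lines may differ.)

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH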
JetPack version is 5.1.2.
Device is a Jetson AGX Orin 64GB Developer Kit.
Here are additional details that might help:
torch.cuda.device_count() returns 0
pip3 show pycuda returns:
Summary: Python wrapper for Nvidia Cuda
Author: Andreas Kloeckner
Hi @marinkovicivan, if you’ve installed other packages since, it’s possible that one of them installed another version of PyTorch from pip/PyPI, and those wheels were not built with CUDA enabled. Please try re-installing the PyTorch wheel you originally installed from https://developer.download.nvidia.com/compute/redist/jp/ and check torch.cuda.is_available() again.
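A quick way to verify which build you end up with after re-installing (a minimal sketch; the version strings in the comments are illustrative, not guaranteed):

import torch

print(torch.__version__)          # NVIDIA Jetson wheels carry a tag like 2.0.0+nv23.05
print(torch.version.cuda)         # None on CPU-only builds
print(torch.cuda.is_available())  # should be True with the NVIDIA wheel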
I have just uninstalled torch and torchvision and installed the latest torch version. When I try
torch.cuda.is_available() it returns True, but after I install torchvision 0.16.0,
torch.cuda.is_available() returns False.
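One way to confirm that the torchvision install replaced the Jetson torch wheel is to inspect the package metadata (the exact output here is an assumption on my part):

pip3 show torch

If the Version: field shows a plain PyPI version with no nv tag, pip pulled in a CPU-only torch build as a dependency of torchvision.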
If you want functioning CUDA-enabled torch and torchvision on Jetson, follow this installation procedure: PyTorch for Jetson
Uninstall what you had installed before and reinstall it using the steps above.
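Roughly, the steps look like this (sketch only; the exact wheel URL is deliberately left as a placeholder, take it from the PyTorch for Jetson thread for JetPack 5.1.2):

pip3 uninstall torch torchvision
pip3 install --no-cache-dir <torch wheel URL from https://developer.download.nvidia.com/compute/redist/jp/>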
I don’t think you can use TRT at all without a CUDA-compatible torch.
Yes, you need to build torchvision from source as you mentioned; otherwise installing torchvision from pip will give you non-CUDA versions.
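The build-from-source steps are roughly the following (a sketch based on the PyTorch for Jetson instructions; match the branch and BUILD_VERSION to your installed torch, 0.16.0 here is an assumption):

git clone --branch v0.16.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.16.0
python3 setup.py install --user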
If you want to use torch2trt, then you need torch. torch itself is not required to use TensorRT, though, unless another package you are using requires it.
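To illustrate that point, TensorRT’s Python API can load an engine without importing torch at all (a minimal sketch; model.engine is a hypothetical pre-built engine file):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)   # TensorRT's own logger, no torch involved
runtime = trt.Runtime(logger)             # runtime used to deserialize engines
with open("model.engine", "rb") as f:     # hypothetical serialized TRT engine
    engine = runtime.deserialize_cuda_engine(f.read())
print(engine is not None)                 # True if the engine loaded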