Hi,
I am trying to install TensorRT on a Jetson Orin Nano Developer Kit, but I always get an error saying subprocess-exited-with-error.
How can I resolve this?
A clear and concise description of the bug or issue.
Environment
TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
Hi @jsanjana127 I’m moving this to the Jetson Orin Nano forum, where the team there should be able to help you. Can you share more logs from the install? Thanks!
Sophie
Hi,
Could you share the command you used and the corresponding error log with us?
Thanks.
Hi,
I followed the steps in the GitHub link below:
https://github.com/NVIDIA/TensorRT-LLM/blob/v0.12.0-jetson/README4Jetson.md
The error I get is:
CMake Error at CMakeLists.txt:186 (message):
No CUDA compiler found
-- Configuring incomplete, errors occurred!
See also "/home/san-jetson-orin/TensorRT-LLM/cpp/build/CMakeFiles/CMakeOutput.log".
See also "/home/san-jetson-orin/TensorRT-LLM/cpp/build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "/home/san-jetson-orin/TensorRT-LLM/scripts/build_wheel.py", line 412, in <module>
main(**vars(args))
File "/home/san-jetson-orin/TensorRT-LLM/scripts/build_wheel.py", line 201, in main
build_run(
File "/usr/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'cmake -DCMAKE_BUILD_TYPE="Release" -DBUILD_PYT="ON" -DBUILD_PYBIND="ON" -DNVTX_DISABLE="ON" -DBUILD_MICRO_BENCHMARKS=OFF "-DCMAKE_CUDA_ARCHITECTURES=87" "-DENABLE_MULTI_DEVICE=0" -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache -S "/home/san-jetson-orin/TensorRT-LLM/cpp"' returned non-zero exit status 1.
Defaulting to user installation because normal site-packages is not writeable
WARNING: Requirement 'build/tensorrt_llm-.whl' looks like a filename, but the file does not exist
ERROR: tensorrt_llm-.whl is not a valid wheel filename.
Hi,
Please run
$ nvcc --version
to verify that the CUDA compiler is available.
If it doesn't work, please run the commands below and try again:
$ export PATH=/usr/local/cuda-12.6/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH
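Note that exports made in a terminal are lost when the session ends. If the build works only after running them, a minimal sketch like the one below makes them persistent by appending them to ~/.bashrc (this assumes JetPack installed CUDA 12.6 under /usr/local/cuda-12.6; adjust the path if your install differs):

```shell
# Persist the CUDA paths for future shells.
# Assumption: CUDA 12.6 lives at /usr/local/cuda-12.6 (JetPack's default).
CUDA_HOME=/usr/local/cuda-12.6
echo "export PATH=${CUDA_HOME}/bin:\$PATH" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:\$LD_LIBRARY_PATH" >> ~/.bashrc
```

Open a new terminal (or run `source ~/.bashrc`) afterwards, then re-check with `nvcc --version` before retrying the build.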
Thanks.