I downloaded the Docker image nvcr.io/nvidia/tensorrt:24.08-py3, created a container, and ran the program inside it (parameters: -s /root/yolov8n.trt10.wts /root/yolov8n.trt10.engine n ). It reported an error:
Installing nvdla_compiler failed. Which package should I download and install from the website (Index)? My Jetson is an Orin NX, flashed with JetPack 6. Could you please guide me through the installation?
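A minimal check along these lines (only a sketch, not an official tool) can show whether libnvdla_compiler.so is visible to the loader inside the container versus on the Jetson host:

```python
# Sketch: run this both on the Jetson host and inside the container to compare.
# It checks whether the DLA compiler library the error refers to can be found
# and loaded in the current environment.
import ctypes
import ctypes.util

# find_library consults the system loader configuration; None means not visible.
path = ctypes.util.find_library("nvdla_compiler")
print("find_library:", path)

try:
    ctypes.CDLL("libnvdla_compiler.so")
    print("libnvdla_compiler.so loaded successfully")
except OSError as exc:
    print("failed to load libnvdla_compiler.so:", exc)
```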
Yes, I have tried it, and as I remember it works fine; the 24.05-py3-igpu version also seems to be fine. Their TensorRT versions are all 8.
But our ultimate goal is to use the latest version of TensorRT. We have tested both 10.3 and 8.6.1.6 on Windows, and 10.3 gives roughly a 4%-5% speed improvement.
We are currently using TensorRT 8.6.1.6 on the Jetson Orin NX and want to upgrade to 10.3 for the speed gain. We don't know how to use a newer version of the TensorRT container; can you help us find a way?
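For reference, a small check like this (a sketch, assuming the image ships the tensorrt Python bindings) can confirm which TensorRT version a given container actually provides before committing to it:

```python
# Sketch: print the TensorRT version inside a candidate container and confirm
# a builder can be created on this platform.
import tensorrt as trt

print("TensorRT version:", trt.__version__)

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# num_DLA_cores is guarded in case the attribute differs across releases.
print("DLA cores reported by builder:",
      builder.num_DLA_cores if hasattr(builder, "num_DLA_cores") else "unknown")
```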
We have verified that the TensorRT package can be installed on Jetson directly.
Please find the steps below (you can change to a newer version in a similar manner):
Thanks for the reply, but after flashing the device, installing CUDA 12.6.1, and installing TensorRT 10.3 on Ubuntu 22.04, it failed again (libnvdla_compiler.so => not found). Do you have this file? Could you send it and its dependencies to us directly?
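In case it is useful, a search like the sketch below (the directory names are assumptions based on typical L4T layouts and may differ on a JetPack 6 install) can show whether the flashed BSP placed the library anywhere on the device, as opposed to it merely being missing from the loader path:

```python
# Sketch: look for libnvdla_compiler.so in directories where Jetson/L4T
# user-space libraries are commonly installed.
from pathlib import Path

candidate_dirs = [
    "/usr/lib/aarch64-linux-gnu",
    "/usr/lib/aarch64-linux-gnu/nvidia",   # common on newer JetPack releases (assumption)
    "/usr/lib/aarch64-linux-gnu/tegra",    # common on older L4T releases (assumption)
]

found = []
for d in candidate_dirs:
    p = Path(d)
    if p.is_dir():
        found.extend(sorted(p.glob("libnvdla_compiler.so*")))

if found:
    for f in found:
        print("found:", f)
else:
    print("libnvdla_compiler.so not found in the usual L4T directories; "
          "it is normally provided by the Jetson BSP rather than the "
          "TensorRT package itself.")
```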