Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other
SDK Manager Version
2.1.0
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Issue Description
I want a TensorRT Docker container environment on DRIVE Orin to run our DNN models. For that I used the following TensorRT container available on the NGC website: nvcr.io/nvidia/tensorrt:21.12-py3
The reason for using this specific version is that it is the first release that supports the Linux arm64 architecture.
The container is pulled and started successfully, but when I try to run its sample C++ programs to check the container environment, every sample fails with the error mentioned below.
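For reference, these are roughly the steps I followed (the exact docker run flags are from memory, so treat them as approximate; I am also not sure whether additional GPU passthrough flags such as --runtime nvidia are required on DRIVE Orin, which may be relevant here):

# On the Orin target: pull the arm64 TensorRT container from NGC
docker pull nvcr.io/nvidia/tensorrt:21.12-py3
# Start an interactive container
docker run -it --rm nvcr.io/nvidia/tensorrt:21.12-py3
# Inside the container: build the C++ samples, then run one of them
cd /workspace/tensorrt/samples && make -j"$(nproc)"
cd /workspace/tensorrt/bin && ./sample_onnx_mnist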
The error message points to a CUDA installation problem, but according to the container image description, CUDA is already part of the container environment.
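A quick way to confirm that the CUDA toolkit files are actually present inside the container (assuming the standard /usr/local/cuda layout of the NGC images) is:

# Check for the CUDA runtime library and the bundled compiler
ls /usr/local/cuda/lib64/libcudart.so*
nvcc --version

As far as I understand, CUDA error 999 is cudaErrorUnknown and is typically returned when CUDA driver initialization fails, not when toolkit files are missing, so the toolkit being present in the image would not by itself rule out this error.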
PS: I ran the same container samples on the x86 host environment used for DRIVE Orin development, and they execute successfully without any error.
Requesting your help in resolving this error.
Error String
[10/09/2024-11:47:11] [E] [TRT] 6: [cudaDeviceProfile.cpp::isCudaInstalledCorrectly::119] Error Code 6: Internal Error (CUDA initialization failure with error 999. Please check your CUDA installation: CUDA Installation Guide for Linux)
&&&& FAILED TensorRT.sample_onnx_mnist [TensorRT v8201] # ./sample_onnx_mnist
Logs
root@1b9425014e73:/workspace/tensorrt/bin# ./sample_onnx_mnist
&&&& RUNNING TensorRT.sample_onnx_mnist [TensorRT v8201] # ./sample_onnx_mnist
[10/09/2024-11:47:11] [I] Building and running a GPU inference engine for Onnx MNIST
[10/09/2024-11:47:11] [W] [TRT] Unable to determine GPU memory usage
[10/09/2024-11:47:11] [W] [TRT] Unable to determine GPU memory usage
[10/09/2024-11:47:11] [I] [TRT] [MemUsageChange] Init CUDA: CPU +7, GPU +0, now: CPU 17, GPU 0 (MiB)
[10/09/2024-11:47:11] [E] [TRT] 6: [cudaDeviceProfile.cpp::isCudaInstalledCorrectly::119] Error Code 6: Internal Error (CUDA initialization failure with error 999. Please check your CUDA installation: CUDA Installation Guide for Linux)
&&&& FAILED TensorRT.sample_onnx_mnist [TensorRT v8201] # ./sample_onnx_mnist