Running x86_64 TensorRT closed source inference software on Jetson Nano

Greetings!

I am trying to run a TensorRT CUDA inference program (ONNX model) that was compiled for x86_64 on a Jetson Nano. The basic idea that comes to mind is to use qemu-user to emulate the CPU part and let CUDA run the inference on the GPU.

What I have tried

Using chroot and qemu-user-static, I can successfully run x86_64-compiled programs on the CPU.
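For reference, the CPU-only part is just the usual binfmt_misc registration so the kernel hands x86_64 binaries to qemu. A rough sketch (I built qemu from source, but the multiarch/qemu-user-static helper image performs the same registration):

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# verify the handler is registered and points at the interpreter
cat /proc/sys/fs/binfmt_misc/qemu-x86_64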

Jetson Nano CUDA: 12.0
Container CUDA: 12.0
qemu-user: 6.2.0, compiled from source
docker command:

docker run -e DISPLAY=:0 --runtime=nvidia \
  --platform=linux/amd64 --name=run2test \
  -v ./qemu-6.2.0/build/qemu-x86_64:/usr/bin/qemu-x86_64-static \
  -v /tmp/.X11-unix:/tmp/.X11-unix --privileged -it test:latest
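A quick sanity check for whether --runtime=nvidia actually exposed the GPU inside the container (a rough sketch; the device node names below are what I would expect on a Tegra/JetPack system, not an exhaustive list):

# Tegra GPU device nodes that the nvidia runtime should pass through
ls -l /dev/nvhost-gpu /dev/nvhost-ctrl-gpu /dev/nvmap
# CUDA driver/runtime libraries visible to the loader inside the container
ldconfig -p | grep -E 'libcuda|libcudart'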

I installed the x86_64 CUDA toolkit inside the container, but when the program reaches the CUDA calls it fails with "CUDA failure 100: no CUDA-capable device is detected":

/workspace/onnx/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:122 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
/workspace/onnx/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:116 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true] CUDA failure 100: no CUDA-capable device is detected ; GPU=0 ; hostname=435555c1560b ; expr=cudaSetDevice(device_id_);

Is there any trick or idea that could help here? The program is closed source, so it cannot be recompiled for the Jetson's aarch64 architecture. Many thanks!

Hi,
For Jetson Nano, the latest release is JetPack 4.6.4, which ships with CUDA 10.2. You can install the toolchain through SDK Manager and try again. It looks like the issue is caused by a CUDA version mismatch.
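To confirm what is currently installed on the device, something like the following should work on a standard JetPack/L4T setup (nvcc is only available if the CUDA toolkit is installed and /usr/local/cuda/bin is on PATH):

cat /etc/nv_tegra_release              # L4T / JetPack release string
dpkg-query --show nvidia-l4t-core      # L4T core package version
nvcc --version                         # CUDA toolkit version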

Hi,

We don't support cross-compiling a Jetson TensorRT application in an x86 environment.
Could you share some information about your use case?

Thanks.
