CUDA error while running inference

Hello,

I have a problem with CUDA while using DeepSpeech for speech recognition. I run my own Ubuntu 20.04.2 LTS (GNU/Linux 4.9.140-tegra aarch64) image on my Jetson Nano 2GB, flashed following this guide:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/flashing.html#wwpID0E0TJ0HA

When I launch the inference engine I get these messages:

2021-02-11 15:35:18.963581: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-02-11 15:35:18.998228: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-11 15:35:18.998338: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (localhost): /proc/driver/nvidia/version does not exist

I would like to use the GPU of the board, but I don’t understand the error. Any help with it is welcome!
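For what it’s worth, the second log line points at the missing kernel driver interface. A quick check of exactly what the log refers to, runnable on the Jetson itself (a generic sketch, not specific to DeepSpeech):

```shell
#!/bin/sh
# Check the same thing cuda_diagnostics.cc complains about: the presence of
# /proc/driver/nvidia/version, which only exists when the NVIDIA kernel
# driver is loaded.
if [ -f /proc/driver/nvidia/version ]; then
  status="loaded"
  cat /proc/driver/nvidia/version
else
  status="missing"
  echo "kernel driver not loaded: /proc/driver/nvidia/version does not exist"
fi
echo "driver status: $status"
```

Note that on Tegra boards the GPU is driven by the L4T BSP modules rather than the desktop driver stack, so the image has to include the matching L4T components for this to come up.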

I think your custom image is missing CUDA drivers and/or some other essentials…
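If that is the diagnosis, a quick way to check whether the CUDA user-space pieces made it into the custom image (a sketch; the `/usr/local/cuda` path is an assumption based on the standard JetPack layout):

```shell
#!/bin/sh
# Check for the CUDA toolkit directory (assumption: standard JetPack install
# under /usr/local/cuda) and whether the dynamic loader can find libcudart.
if [ -d /usr/local/cuda ]; then
  cuda_dir="present"
else
  cuda_dir="absent"
fi
echo "/usr/local/cuda: $cuda_dir"

if ldconfig -p 2>/dev/null | grep -q libcudart; then
  cudart="present"
else
  cudart="absent"
fi
echo "libcudart in ldconfig cache: $cudart"
```

If either comes back absent, reinstalling the CUDA packages from the JetPack/L4T repositories on the flashed image would be the first thing to try.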