Jetson Xavier NX runs out of memory when running a Docker container

I am using the Jetson Xavier NX with the following configuration:

Package: nvidia-jetpack

Version: 5.1.3-b29

Architecture: arm64

Maintainer: NVIDIA Corporation

Installed-Size: 194

Depends: nvidia-jetpack-runtime (= 5.1.3-b29), nvidia-jetpack-dev (= 5.1.3-b29)

and CUDA 11.4.
When I run the Docker container, I get this error during frame generation in the video stream:

Error in detect_objects: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle)

If I run the application outside the container, it works perfectly without any issue. I tried reducing the memory usage, but it still gives the same error.
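
A minimal way to reproduce the error inside the container (this is a sketch that assumes a PyTorch-based application; the exact framework and script are not shown here):

    # quick check from a shell inside the container (assumes PyTorch is installed);
    # the first matmul on the GPU creates the cuBLAS handle, which is where the allocation fails
    python3 -c "import torch; x = torch.randn(64, 64, device='cuda'); print(torch.matmul(x, x).sum().item())"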

Hi,

If you want to increase the amount of memory accessible within Docker,
please try the --memory=500M --memory-swap=8G flags when launching the container.
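
For example, something like this (the image name and application entry point below are placeholders, not from your setup):

    # hypothetical launch command showing where the memory flags go
    sudo docker run -it --rm \
        --runtime nvidia \
        --memory=500M \
        --memory-swap=8G \
        my-detection-image:latest \
        python3 app.py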

Thanks

I have tried increasing the memory while launching the container as well, but I get the same error. Even when I set --memory and --memory-swap to the maximum instead of --memory=500M and --memory-swap=8G, and add --shm-size=2G, I still get the same error.

Hi,

Which container do you use?

Please note that you will need to use an L4T-based container to allow GPU access within Docker.
For example:
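
(A minimal sketch of such a launch; the l4t-base image and tag below are illustrative, so pick the tag that matches your L4T release.)

    # illustrative launch of an L4T base container with GPU access;
    # check your release with: cat /etc/nv_tegra_release
    sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-base:r35.4.1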

Thanks.

Currently I am using this image: FROM nvidia/cuda:11.4.3-cudnn8-runtime-ubuntu20.04

Hi,

Please use an L4T-based image, since Jetson ships its own GPU libraries.
Based on your use case, you can try nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel.
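
A sketch of the change: only the base image line of your Dockerfile needs to switch to the L4T image, then launch with the NVIDIA runtime (the image name "my-app" below is a placeholder):

    # Dockerfile: replace the CUDA base image with the L4T TensorRT image
    FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel

    # rebuild, then run with the NVIDIA runtime so the Jetson GPU libraries are mounted
    sudo docker build -t my-app .
    sudo docker run -it --rm --runtime nvidia my-app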

Thanks.
