I can access the GPU from Docker but I can't use it

Hello, I think I have done everything needed to run the GPU in Docker. The container detects the GPU, but any attempt to actually use it fails with the error below.

JetPack 6

I use this image (linked from the PyTorch Release 23.08 release notes on NVIDIA Docs): `nvcr.io/nvidia/tensorrt:23.08-py3`

`torch == 2.3.0`

I'm just running a simple YOLO inference script.

Device used: `cuda`
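For context, here is a minimal sanity check (a sketch, independent of the Ultralytics/YOLO code; nothing here is from my actual script) that isolates whether PyTorch can use the GPU at all inside the container. The tiny matmul forces creation of a cuBLAS handle, which is the same `cublasCreate` call that fails in the traceback below:

```python
import torch

def cuda_check() -> str:
    """Report whether PyTorch can see the GPU and create a cuBLAS handle."""
    if not torch.cuda.is_available():
        return "torch.cuda.is_available() is False"
    try:
        x = torch.ones(8, 8, device="cuda")
        # A matmul routes through cuBLAS, so this triggers cublasCreate().
        (x @ x).sum().item()
        return f"CUDA OK on {torch.cuda.get_device_name(0)}"
    except RuntimeError as exc:
        return f"GPU visible but unusable: {exc}"

print(cuda_check())
```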

```
Traceback (most recent call last):
  File "/app/test.py", line 27, in <module>
    results = model(frame)
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/ultralytics/engine/model.py", line 182, in __call__
    return self.predict(source, stream, **kwargs)
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/ultralytics/engine/model.py", line 553, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 310, in setup_model
    self.model = AutoBackend(
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/ultralytics/nn/autobackend.py", line 152, in __init__
    model = model.fuse(verbose=verbose)
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 206, in fuse
    m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
  File "/root/miniconda3/envs/test_env/lib/python3.10/site-packages/ultralytics/utils/torch_utils.py", line 262, in fuse_conv_and_bn
    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```