An error occurred while deploying Triton Server on a Xavier NX

Triton Server version: 2.33.0
Xavier version: Jetson Xavier NX Developer Kit
P-Number: p3668-0000
SoC: tegra194
CUDA arch bin: 7.2
L4T: 35.3.1  JetPack: 5.1.1
Libraries:
CUDA: 11.4.315
cuDNN: 8.6.0.166  TensorRT: 8.5.2.2
OpenCV: 4.5.4 with CUDA: YES
The command executed is:
docker run --gpus=1 --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:23.04-py3

=============================
== Triton Inference Server ==
=============================

NVIDIA Release 23.04 (build 58408266)
Triton Server Version 2.33.0

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:

ERROR: The NVIDIA Driver is present, but CUDA failed to initialize. GPU functionality will not be available.
[[ Unable to initialize CUDA driver (error ???) ]]

NVIDIA Tegra driver detected.

Hi,

You will need an image with the l4t tag to run on the Jetson.

There is an image with Triton preinstalled: the DeepStream container (deepstream-l4t:6.2-triton).
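For reference, pulling and running the Jetson container might look like the sketch below. The image tag is the one mentioned in this thread; confirm the current tag on NGC before pulling. Note that on Jetson, GPU access is typically exposed through the NVIDIA container runtime (`--runtime nvidia`) rather than the `--gpus` flag used on dGPU hosts:

```shell
# Pull the L4T DeepStream container that ships with Triton
# (tag from this thread; verify on NGC before use).
docker pull nvcr.io/nvidia/deepstream-l4t:6.2-triton

# Run it with the NVIDIA runtime, mounting the same model
# repository path used in the original command.
docker run -it --rm --runtime nvidia --net=host \
  -v ${PWD}/model_repository:/models \
  nvcr.io/nvidia/deepstream-l4t:6.2-triton
```

The exact location of the Triton binaries inside this container may differ from the standard tritonserver image, so check the container's documentation for how to launch the server.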

Thanks.

Hello, can you explain the specific reason for the error?

If pulling that image doesn’t solve the problem, can I solve it by installing DeepStream-l4t instead?

Hi,

The error indicates that you need a dGPU driver (discrete GPU) to run the container.

ERROR: The NVIDIA Driver is present, but CUDA failed to initialize. GPU functionality will not be available.

However, Jetson is an embedded system equipped with an iGPU (integrated GPU).
For the iGPU, the driver is included in the OS, which we call L4T (Linux4Tegra).

That’s why we think a container with an l4t tag should fix your problem.
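As a quick sanity check, the L4T driver release bundled with the OS can be read on the Jetson host from `/etc/nv_tegra_release` (this file is standard on flashed Jetson images; the exact output depends on your image):

```shell
# Run on the Jetson host, not inside a container.
head -n 1 /etc/nv_tegra_release
# For L4T 35.3.1 this line should report R35 with REVISION: 3.1.
```

The l4t-tagged containers are built against these Tegra drivers, which is why the dGPU-oriented tritonserver:23.04-py3 image fails to initialize CUDA on this board.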

Thanks.

