Description
I have a Jetson Orin Nano and a Jetson TX2, and I want to run Triton Inference Server in a Docker container on both boards. The Triton server images on NGC have `igpu` tags, which work on the Orin Nano but not on the TX2. How can I build a Docker container that also runs on the TX2?
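For context, the two boards run very different software stacks, which is likely why the same image behaves differently. A quick way to compare them is to read the L4T release from `/etc/nv_tegra_release` on each Jetson. The sketch below parses a sample line of that file (the sample string is an assumption for illustration; the actual GCID/BOARD fields will differ on a real board):

```shell
# On a real Jetson you would inspect the file directly:
#   cat /etc/nv_tegra_release
# Sample first line as it might appear on a TX2 (illustrative values):
line='# R32 (release), REVISION: 7.4, GCID: 33514132, BOARD: t186ref'

# Extract the major L4T release number ("32" here); R32 corresponds to
# JetPack 4.x, while recent igpu Triton images target much newer JetPack
# releases on Orin-class boards.
release=$(echo "$line" | sed -n 's/^# R\([0-9]*\) (release).*/\1/p')
echo "L4T release: R$release"
```

Comparing this value between the Orin Nano and the TX2 shows at a glance whether the two boards are even on the same L4T/JetPack generation.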
Relevant Files

```shell
docker run --runtime nvidia --rm -it \
  -v /path/to/your/models:/models \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  nvcr.io/nvidia/tritonserver:25.08-py3-igpu \
  tritonserver --model-repository=/models
```
```text
=============================
== Triton Inference Server ==
=============================

NVIDIA Release 25.08 (build 202022691)
Triton Server Version 2.60.0

Copyright (c) 2018-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .

terminate called after throwing an instance of 'std::system_error'
  what():  Operation not permitted
```
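As a hedged reading of the crash: "Operation not permitted" is the standard error text for errno 1 (EPERM), and `std::system_error` reports exactly that string when a syscall is denied. One plausible (unconfirmed) cause is the much newer userspace in the 25.08 image invoking a syscall that the TX2's older L4T kernel or the container runtime's seccomp filter rejects. The snippet below only demonstrates the errno-to-message mapping (python3 is used purely as a portable `strerror`):

```shell
# Map errno 1 (EPERM) to its message text; this matches the "what():"
# line in the Triton log above, consistent with a denied syscall.
msg=$(python3 -c 'import errno, os; print(os.strerror(errno.EPERM))')
echo "errno 1 -> $msg"
```

If this hypothesis holds, the failure would occur before Triton itself does any work, which would explain why it appears immediately after the banner.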
Steps To Reproduce
Run Triton Server on a Jetson TX2 using the `docker run` command shown above.