Support for Triton Inference Server on Jetson NX


We are trying to run Triton server with a detection use case on Jetson NX. Are there any step-by-step instructions available for using Triton Inference Server on Jetson NX?

We have already followed this LINK, but it is two years old and that setup did not work on the NX.

We would really appreciate it if there were an up-to-date Triton server Docker container available as a starting point for Jetson NX-based application implementations.

Any leads would be really appreciated.



It’s recommended to use Triton together with DeepStream to get an optimal pipeline on Jetson.
Some samples can be found in the repository below:
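Whether Triton runs standalone or inside a DeepStream pipeline, it loads models from a Triton model repository, so setting one up is a useful first step on the NX. Below is a minimal sketch of such a layout for a hypothetical TensorRT detection model; the `detector` name, the paths, and the `tensorrt_plan` platform are illustrative assumptions, not details from this thread:

```shell
# Minimal Triton model-repository layout (names and paths are illustrative).
# Triton expects: <repo>/<model-name>/<version>/<model-file> plus a config.pbtxt.
mkdir -p model_repo/detector/1

# The serialized model goes in the version directory, e.g. a TensorRT engine:
#   model_repo/detector/1/model.plan  (copy your engine file here)

# Minimal model configuration: name, backend platform, and batching limit.
cat > model_repo/detector/config.pbtxt <<'EOF'
name: "detector"
platform: "tensorrt_plan"
max_batch_size: 1
EOF
```

On Jetson, Triton is distributed as a JetPack release tarball rather than the usual x86 container; once extracted, the server can be pointed at this repository with `tritonserver --model-repository=$(pwd)/model_repo`.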


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.