Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
I have searched the forums previously but did not find an exact answer to my question. I apologize if this has already been answered; I am totally new to Jetson devices and DeepStream.
From what I understand, the Triton Inference Server is supported with DeepStream 5.0.
However, I cannot find documentation for using the Triton Inference Server on Jetson devices. I do see that it is not supported on Jetson Nano/TX1/TX2 (extract from the release notes below):
“Triton Inference Server is not supported on Jetson Nano, TX1 and TX2 in this release”
For the other Jetson devices, however, where can I find instructions on using the Triton Inference Server?
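For context, here is the kind of Gst-nvinferserver primary-GIE configuration I was expecting to use, based on my reading of the DeepStream 5.0 sample layout. This is only a sketch under my own assumptions: the model name and the model repository path are placeholders I made up, not values from any documentation I found.

```
# Sketch of a Gst-nvinferserver config (protobuf text format), as I understand it.
# "my_model" and the model_repo root below are my placeholder assumptions.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "my_model"   # placeholder model name
      version: -1              # latest version in the repository
      model_repo {
        root: "/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo"
      }
    }
  }
}
```

Is a config along these lines the intended way to use Triton on the supported Jetson devices, or is a standalone Triton server instance required?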