Hi there. I'm wondering whether it's now possible to run a TensorFlow-TensorRT inference server with Docker on a JetPack device, e.g. Xavier or Nano.
There was a previous thread here, but the current state of support is still unclear to me, at least from the Triton GitHub README.