On the Kubeflow website there is no longer any reference to TensorRT or Triton.
How can I deploy an inference server that uses NVIDIA Triton/TensorRT in Kubeflow?
Is there any updated documentation from NVIDIA, maybe internal?
I was trying to deploy RIVA with Kubeflow.
Some people have done RAPIDS with Kubeflow, but that's a different thing.
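For context, what I'm trying to get working is something along the lines of the Triton predictor that KServe (Kubeflow's serving layer) documents. This is just a sketch of my intent; the name and the `storageUri` model repository path are placeholders, not a working setup:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: triton-example            # placeholder name
spec:
  predictor:
    triton:
      # placeholder path: should point at a Triton model repository
      storageUri: gs://my-bucket/models
```

Is this still the supported route for Triton/TensorRT in current Kubeflow, or has it been replaced?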