DeepStream Gst-nvinferserver with HTTPS endpoint

When Gst-nvinferserver is used in a DeepStream pipeline, it communicates with the Triton Inference Server through the C API. I want to deploy a Triton server hosting multiple models on a T4 instance, and have deepstream-app on the same T4 instance talk to that server over HTTPS. How can I make DeepStream communicate with the Triton server this way?
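For context, I was expecting to configure something like the remote (non-C-API) mode of nvinferserver. The sketch below assumes the gRPC remote option available in newer DeepStream releases (as far as I can tell, nvinferserver does not expose an HTTP/HTTPS client, only C API and gRPC); the model name and port are placeholders:

```
# Hypothetical nvinferserver config sketch (DeepStream 6.x gRPC remote mode).
# "my_model" and the URL are placeholders for illustration only.
infer_config {
  backend {
    triton {
      model_name: "my_model"
      version: -1
      # Connect to a standalone tritonserver over gRPC instead of the C API.
      grpc {
        url: "localhost:8001"
      }
    }
  }
}
```

Is something like this the intended way to reach a standalone Triton server, and is there any supported path for HTTPS specifically?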

If you would still like a response, please consider re-posting your question on Triton Inference Server · GitHub, where the NVIDIA team and others will be able to help you.
Sorry for the inconvenience, and thanks for your patience.