How to use an MMPose model converted with MMDeploy on Triton Inference Server?

Hi,

I am trying to use MMPose with the NVIDIA Triton Inference Server, but Triton does not support native PyTorch models; it supports TorchScript, ONNX, and a few other formats. So I have converted the MMPose MobileNetV2 model to ONNX using MMDeploy.
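For reference, the conversion was done with MMDeploy's deploy script, roughly like this (the deploy config, model config, checkpoint, and image paths are placeholders for my local setup):

python tools/deploy.py \
    configs/mmpose/pose-detection_onnxruntime_static.py \
    path/to/mmpose_mobilenetv2_config.py \
    path/to/mobilenetv2_checkpoint.pth \
    path/to/test_image.jpg \
    --work-dir ./mmdeploy_models/mobilenetv2 \
    --device cpu

This produces an end2end.onnx file in the work directory.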

My question is:

  1. Triton uses its own way to run inference on the model.
    Example:

triton_client.infer(model_name, model_version=model_version,
                    inputs=inputs, outputs=outputs)

MMDeploy uses its own way to run inference on the model.
Example:

from mmdeploy_python import PoseDetector
detector = PoseDetector(
    model_path=args.model_path, device_name=args.device_name, device_id=0)

How am I supposed to load and run the model the Triton way, instead of using MMDeploy's PoseDetector function?
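From reading the Triton docs, my current understanding is that Triton only runs the raw ONNX graph, so the preprocessing and keypoint decoding that PoseDetector handles internally would have to live in my client code. Is that right? Here is a minimal sketch of what I am attempting with the tritonclient package; the model name, the tensor names ("input"/"output"), and the input shape are guesses based on MMDeploy's defaults, not verified:

import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server started with:
#   tritonserver --model-repository=/models
# where /models/mobilenetv2_pose/1/model.onnx is the end2end.onnx from MMDeploy.
client = httpclient.InferenceServerClient(url="localhost:8000")

# MMPose top-down models typically take a normalized 256x192 crop in
# NCHW float32; the exact preprocessing comes from the MMPose config.
# Random data here, just to exercise the server.
img = np.random.rand(1, 3, 256, 192).astype(np.float32)

# "input"/"output" are MMDeploy's default ONNX tensor names (a guess);
# the real names can be checked by loading end2end.onnx with the onnx package.
inputs = [httpclient.InferInput("input", list(img.shape), "FP32")]
inputs[0].set_data_from_numpy(img)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer("mobilenetv2_pose", inputs=inputs, outputs=outputs)
heatmaps = result.as_numpy("output")
print(heatmaps.shape)  # keypoint heatmaps; decoding to (x, y) is up to me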

I have been stuck on this for a long time.
