REST API / curl command for posting images to Triton Inference Server

Hi, we are using Triton Inference Server on a Jetson Xavier NX. We have been able to load TensorFlow-based object detection models onto the Triton Inference Server.

  1. Is there a curl command we can use to post an image to the model and get a response back?
  2. What is the HTTP REST API endpoint for running inference on an image?



Hello Krishna,

You can refer to the Python client in the TRTIS client repository [URL] — it shows how to send an image to the model and get the response back.

The HTTP API listens on IP:8000; inference requests for a given model go to /v2/models/<model_name>/infer.
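As a minimal sketch of what such a request looks like, the snippet below builds a KServe v2 inference request body that Triton's /v2/models/<model_name>/infer endpoint accepts. The model name, input tensor name, and image shape here are placeholders — check the real values with curl localhost:8000/v2/models/<model_name>/config.

```python
import json

# Hypothetical model details -- replace with values reported by
# curl localhost:8000/v2/models/<model_name>/config
MODEL_NAME = "detector"       # assumed model name
INPUT_NAME = "input_tensor"   # assumed input tensor name
H, W = 4, 4                   # tiny placeholder image size

# A flat list of uint8 pixel values standing in for a decoded image.
pixels = [0] * (H * W * 3)

# KServe v2 inference request body.
request_body = {
    "inputs": [
        {
            "name": INPUT_NAME,
            "shape": [1, H, W, 3],
            "datatype": "UINT8",
            "data": pixels,
        }
    ]
}

payload = json.dumps(request_body)
print(payload[:80])
```

Saved to a file, this payload can then be posted with curl, e.g. curl -X POST localhost:8000/v2/models/detector/infer -H "Content-Type: application/json" -d @request.json.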

Hi @drkrishna ,

You can use the following commands outside the server container to verify the server setup:

curl -v localhost:8000/v2

curl -v localhost:8000/v2/health/ready

curl localhost:8000/v2/models/<model_name>/config

More information: the triton-inference-server/server repository on GitHub.

Thanks @mmakwana @bgiddwani !