I used TensorRT's sampleUffMaskRCNN example to convert our Mask R-CNN model from a Keras .h5 file and build an engine file. I loaded this engine onto the Triton server with the --strict-model-config=false flag so that the config.pbtxt is generated automatically, and the server starts successfully on the HTTP and gRPC ports. What I need help with now is writing the inference code using the Triton C++ client API — I'm not sure which client examples to follow. Any pointers on how to go about this, or which particular example to start from, would really help! Thank you.
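For context, here is the rough direction I've sketched so far, modeled on the simple_grpc_infer_client.cc example in the Triton client repo. The model name, tensor names, shape, and dtype below are placeholders I made up — they would need to match whatever the auto-generated config.pbtxt actually contains:

```cpp
// Sketch of a Triton gRPC C++ client call (requires grpc_client.h and
// libgrpcclient from the triton-inference-server/client SDK).
// All model/tensor names and shapes here are placeholders.
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

#include "grpc_client.h"  // Triton C++ client SDK header

namespace tc = triton::client;

int main() {
  // Connect to Triton's gRPC endpoint (default port 8001).
  std::unique_ptr<tc::InferenceServerGrpcClient> client;
  tc::Error err =
      tc::InferenceServerGrpcClient::Create(&client, "localhost:8001");
  if (!err.IsOk()) { std::cerr << err << std::endl; return 1; }

  // Placeholder input tensor: adjust name/shape/dtype to config.pbtxt.
  std::vector<int64_t> shape{1, 3, 1024, 1024};
  std::vector<float> data(1 * 3 * 1024 * 1024, 0.0f);

  tc::InferInput* input_ptr = nullptr;
  err = tc::InferInput::Create(&input_ptr, "input_image", shape, "FP32");
  if (!err.IsOk()) { std::cerr << err << std::endl; return 1; }
  std::shared_ptr<tc::InferInput> input(input_ptr);
  input->AppendRaw(reinterpret_cast<const uint8_t*>(data.data()),
                   data.size() * sizeof(float));

  // Request one output tensor; "detection_output" is a placeholder name.
  tc::InferRequestedOutput* output_ptr = nullptr;
  tc::InferRequestedOutput::Create(&output_ptr, "detection_output");
  std::shared_ptr<const tc::InferRequestedOutput> output(output_ptr);

  // Model name as registered in the Triton model repository.
  tc::InferOptions options("mask_rcnn");

  tc::InferResult* result = nullptr;
  err = client->Infer(&result, options, {input.get()}, {output.get()});
  if (!err.IsOk()) { std::cerr << err << std::endl; return 1; }

  // Read back the raw output bytes for post-processing.
  const uint8_t* buf = nullptr;
  size_t byte_size = 0;
  result->RawData("detection_output", &buf, &byte_size);
  std::cout << "received " << byte_size << " output bytes" << std::endl;

  delete result;
  return 0;
}
```

Is this roughly the right pattern, or should I be starting from image_client.cc instead, since that one already handles image pre-processing?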