How to run a custom YOLOv5 model in Triton Inference Server

I have trained a custom YOLOv5 model for 38 classes and converted it to ONNX. Now I am trying to deploy it on Triton Inference Server on a g4 instance. The ONNX model expects a 4-dimensional input (NCHW), and because of that I am unable to run inference from the client. Could anybody let me know the best way to set this up? A sample config file for this case would be a great help.
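For reference, this is roughly the config.pbtxt I have been experimenting with. I am assuming the default YOLOv5 export here: input tensor named "images" with shape 1x3x640x640, output named "output0" with shape 1x25200x43 (38 classes + 5 box values) — the actual names and shapes in my model may differ, which I can check by opening the ONNX file in Netron:

```
name: "yolov5_custom"
platform: "onnxruntime_onnx"
max_batch_size: 0
input [
  {
    name: "images"          # default name from YOLOv5 ONNX export
    data_type: TYPE_FP32
    dims: [ 1, 3, 640, 640 ] # full NCHW shape, batch dim included
  }
]
output [
  {
    name: "output0"          # "output" in older YOLOv5 exports
    data_type: TYPE_FP32
    dims: [ 1, 25200, 43 ]   # 25200 candidate boxes x (5 + 38 classes)
  }
]
```

With max_batch_size set to 0, the dims include the batch dimension explicitly, which I believe matches a 4-D ONNX input that was exported with a fixed batch size of 1.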
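On the client side, this is roughly how I build the 4-D NCHW input tensor before sending it (a numpy-only sketch; the nearest-neighbour resize is a crude stand-in for YOLOv5's letterbox preprocessing, and the 640x640 size is assumed from the default export):

```python
import numpy as np

def preprocess(img_bgr: np.ndarray, size: int = 640) -> np.ndarray:
    """Turn an HxWx3 uint8 image into the 1x3xSIZExSIZE float32
    tensor the ONNX export expects (NCHW, scaled to [0, 1])."""
    h, w = img_bgr.shape[:2]
    # Crude nearest-neighbour resize with pure numpy (stand-in for
    # cv2.resize / YOLOv5 letterbox, just to show the shape handling).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = img_bgr[rows][:, cols]
    x = resized.astype(np.float32) / 255.0  # scale to [0, 1]
    x = x[:, :, ::-1]                       # BGR -> RGB
    x = x.transpose(2, 0, 1)                # HWC -> CHW
    return np.ascontiguousarray(x[None])    # add batch dim -> NCHW

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy frame
tensor = preprocess(frame)
print(tensor.shape, tensor.dtype)  # (1, 3, 640, 640) float32
```

I then wrap this array in a tritonclient InferInput (e.g. `tritonclient.http.InferInput("images", tensor.shape, "FP32")`, assuming the input really is named "images") before calling infer.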
Thanks in advance