How to deploy and run the Mistral-7B-Instruct-v0.2 model with Triton Inference Server on an AWS EC2 instance

I have downloaded the Mistral-7B-Instruct-v0.2 model from Hugging Face and want to convert it to a format supported by Triton Inference Server, then serve it with Triton on an EC2 instance. I am looking for a support document that covers the deployment steps as well as the infrastructure details (for example, which EC2 instance type and GPU memory are required for a 7B model).
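From the Triton documentation, my understanding is that the server reads models from a "model repository" directory and is usually launched from an NGC container; for an LLM like Mistral, the vLLM backend container is one option. Below is a sketch of the setup I have in mind — the container tag (24.01), the model name `mistral7b`, and the paths are placeholders I chose, not values from any official doc:

```shell
# Expected model repository layout (directory and model names are my own choice):
# model_repository/
# └── mistral7b/
#     ├── config.pbtxt   # Triton model configuration
#     └── 1/             # version directory holding the model files
#
# Launch Triton on the EC2 instance from the NGC vLLM backend container.
# Default ports: 8000 = HTTP, 8001 = gRPC, 8002 = metrics.
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v "$PWD/model_repository":/models \
  nvcr.io/nvidia/tritonserver:24.01-vllm-python-py3 \
  tritonserver --model-repository=/models
```

Is this roughly the right shape, and is the vLLM backend the recommended path for Mistral-7B-Instruct-v0.2, or should I convert the model with TensorRT-LLM instead?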
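For inference, I am assuming I would call Triton's HTTP `generate` endpoint once the server is up. A minimal sketch of how I would build the request body — the model name `mistral7b`, the host/port, and the default token limit are my assumptions:

```python
import json

# Assumed endpoint: Triton's generate API for LLM backends.
# "mistral7b" must match the model directory name in the repository.
TRITON_URL = "http://localhost:8000/v2/models/mistral7b/generate"

def build_generate_payload(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a request body for Triton's generate endpoint."""
    return json.dumps({"text_input": prompt, "max_tokens": max_tokens})

if __name__ == "__main__":
    # Mistral-Instruct expects the [INST] ... [/INST] chat template.
    body = build_generate_payload("[INST] What is Triton Inference Server? [/INST]")
    print(body)
    # Would be sent with e.g.: requests.post(TRITON_URL, data=body)
```

Does this match the expected input schema for this model on Triton?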