Set up Triton Inference Server on a Windows Server 2019 machine with a Tesla GPU + inference using Python

We need to set up NVIDIA Triton Inference Server on a Windows Server 2019 machine and use its Tesla GPU for inference, with client applications calling the server from Python.
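
For context, the client-side call we have in mind is roughly the sketch below, using the `tritonclient` Python package over HTTP. The model name (`my_model`), the input/output tensor names, and the shape/datatype are placeholders that would depend on the actual model in the repository.

```python
# Rough sketch of the Python client call we intend to make against Triton.
# The model name, tensor names, shape, and dtype below are placeholders.
import numpy as np
import tritonclient.http as httpclient

# Connect to the Triton HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy input tensor matching the model's expected shape/dtype.
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(input_data.shape), "FP32")
infer_input.set_data_from_numpy(input_data)

# Run inference and read the output tensor back as a NumPy array.
response = client.infer(model_name="my_model", inputs=[infer_input])
output = response.as_numpy("output__0")
print(output.shape)
```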

Every approach we came across requires running Triton in Docker, and as far as I know Docker on Windows Server needs WSL. However, we don't want to set up WSL on this system.

Could someone please share the steps to do this?

Is there a way to set up Docker without WSL? If so, please share a reference for running Triton Inference Server that way.