TensorRT Inference Server Windows support

I have the inference server up and running with a TensorRT model I created from an existing Caffe model. I also have the inference client up and running in a Linux environment and can do inference. What I would like to do is run a Windows client to query the inference server, but the client code appears to be written specifically for a Linux environment.

I am hoping to get some pointers on how to adapt the inference server API code, or "strip" it out of the Python client, so that I can call the inference server from Windows Python.

Hello,

The NVIDIA TensorRT Inference Server exposes its inference API over HTTP (and gRPC), which is OS-agnostic on the client side. The problems you are hitting are most likely Python packaging incompatibilities rather than anything Linux-specific in the protocol.
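For example, any HTTP client on Windows can talk to the server directly. A minimal sketch (the host name, port 8000, and the /api/status path are assumptions based on the default configuration, so adjust them for your deployment):

# Minimal sketch: query the inference server status over plain HTTP from
# Windows Python. Host, port 8000, and the /api/status route are assumptions
# based on the default configuration; adjust for your setup.
import requests

SERVER = "http://inference-server-host:8000"  # placeholder host

resp = requests.get(SERVER + "/api/status", timeout=5.0)
resp.raise_for_status()
print(resp.text)  # server and model status response body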

Can you post examples of the errors you are seeing?

R:\TensorRT\clients> python grpc_image_client.py -u 10.4.205.117:8001 -m trt_afilter -s INCEPTION test.jpg
Traceback (most recent call last):
  File "grpc_image_client.py", line 37, in <module>
    from inference_server.api import api_pb2
ImportError: No module named inference_server.api

The .whl file provided with the inference server is for Linux installs only.

Hello,

There is an example image_client application (both C++ and Python versions) in the https://github.com/NVIDIA/dl-inference-server repo that shows how to talk to the inference server using the Inference Server Client Library (the source for the library is also included in that repo).

We haven't tried building those on Windows, but it is likely not too difficult (we'd like to hear about your experience if you try).
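If installing the prebuilt Python wheel is the blocker, one unverified workaround is to regenerate the Python protobuf/gRPC stubs yourself on Windows with grpcio-tools (pip install grpcio-tools) and point the example client at them. The .proto file names below are assumptions, so check the repo for the actual files:

# Sketch: regenerate the Python gRPC stubs from the .proto files shipped in the
# repo. Run this from the directory that contains the protos. The file names
# below are assumptions; adjust them to match the repo layout.
from grpc_tools import protoc

protoc.main([
    "grpc_tools.protoc",
    "-I.",                    # include path: the directory holding the .proto files
    "--python_out=.",         # emit *_pb2.py message modules
    "--grpc_python_out=.",    # emit *_pb2_grpc.py service stubs
    "api.proto",
    "grpc_service.proto",
])

The example client imports these as a package (inference_server.api), so its import lines would need to be adjusted to point at wherever the generated modules end up.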

I've checked out the makefile and found it to be specific to a Linux install. I've tried to get it to build in a Windows environment but am not having any luck. I've also posted on the GitHub repo for more assistance.

When this was announced at GTC in March, Windows support was indicated, but we are well into September and I am not seeing any real Windows options to work with.

Hello,

We are sorry that the Windows client is not supported yet. We are always reviewing feature requests from the community and planning accordingly internally.

Sorry again that we cannot share more information about future releases here.
Please watch our announcements for updates.

Update: we now have Windows support for building the client libraries.
See https://github.com/NVIDIA/tensorrt-inference-server/blob/master/docs/client.rst#windows-10

There are no plans to port the server to Windows.
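Once the client libraries are built on Windows per those instructions, calling the server from Windows Python should look the same as on Linux. A rough sketch, assuming the wheel you build exposes the same tensorrtserver.api module as the Linux package; the input and output names below are placeholders for your own model's configuration:

# Rough sketch of calling the server from Windows Python, assuming the client
# wheel built per the linked instructions exposes the same tensorrtserver.api
# module as on Linux. Input/output names and shapes below are placeholders.
import numpy as np
from tensorrtserver.api import InferContext, ProtocolType

protocol = ProtocolType.from_str("http")              # "grpc" should also work
ctx = InferContext("inference-server-host:8000", protocol, "trt_afilter")

# One numpy array per batch element; shape/dtype must match the model config.
image = np.zeros((3, 224, 224), dtype=np.float32)     # placeholder input data

results = ctx.run(
    {"input": [image]},                               # placeholder input name
    {"output": (InferContext.ResultFormat.CLASS, 1)}, # top-1 classification
    batch_size=1)
print(results)

Swapping "http" for "grpc" and pointing at the gRPC port (8001 in the command earlier in this thread) should exercise the same code path as grpc_image_client.py.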