Hi NVIDIA team,
I am trying to serve a custom model with Triton Server on my PC (RTX 2060 GPU).
I am using the nvcr.io/nvidia/tensorrtserver:19.10-py3 container for serving.
While serving, I get an error (see the attached image below).
I used the nvcr.io/nvidia/tensorrt:19.10-py2 image to convert the model from ONNX to TensorRT.
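In case it helps with the diagnosis, my model repository is laid out roughly like this (the model, file, and tensor names below are placeholders, not my exact ones), with the engine built in the tensorrt:19.10-py2 container placed as the version file and a minimal config.pbtxt next to it:

```
models/
└── my_model/            # placeholder model name
    ├── config.pbtxt
    └── 1/
        └── model.plan   # TensorRT engine built in the tensorrt container
```

```
# config.pbtxt (minimal sketch; tensor names, types, and shapes are placeholders)
name: "my_model"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```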
Could you tell me where I am going wrong, and how I can avoid this error?
Thanks in advance.