TensorFlow 2 support

Hello,

Is there any ETA for adding TensorFlow 2 support to TRTIS?
As far as I can see, the NVIDIA container with TensorFlow 2 was released this month, but TRTIS still uses the previous version of TensorFlow.

Thanks!

Hi, I wanted to revive this thread because I am also interested in adding TensorFlow 2 support to Triton.

I am trying to deploy a TensorFlow GraphDef model on Triton 2.0.0. In my local dev environment (not on Triton), inference is much slower when the model is exported with TensorFlow 1.15.2 than with 2.2.0. When I tested both exports on Triton, however, the TF 2.2.0 export lost its performance advantage and was just as slow as the TF 1.15.2 export. My assumption is that TF2 is responsible for the faster inference, and that letting Triton serve TF2 models would bring the same improvement.
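In case it's useful, this is roughly how I'm timing the two exports locally. It's a minimal sketch: the file name, tensor names, input shape, and iteration count are placeholders rather than my actual model's values, and the same script runs under both TF 1.15.2 and TF 2.2.0 via `tf.compat.v1`:

```python
import time

import numpy as np
import tensorflow as tf

# No-op on TF 1.x; lets the Session-style code below run on TF 2.x too.
tf.compat.v1.disable_eager_execution()

# Load the frozen GraphDef export ("model.graphdef" is a placeholder path).
graph_def = tf.compat.v1.GraphDef()
with open("model.graphdef", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Placeholder tensor names and input shape; substitute the real ones.
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
with tf.compat.v1.Session(graph=graph) as sess:
    inp = graph.get_tensor_by_name("input:0")
    out = graph.get_tensor_by_name("output:0")

    sess.run(out, feed_dict={inp: x})  # warm-up run, excluded from timing

    start = time.time()
    runs = 100
    for _ in range(runs):
        sess.run(out, feed_dict={inp: x})
    print("avg latency: %.2f ms" % ((time.time() - start) / runs * 1000))
```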

I tried building a custom image from the base Dockerfile with the TensorFlow image arg updated.

Here is what I set the arg to (I replaced `tf1` with `tf2` after seeing it listed as an option in the TF container docs):

```dockerfile
ARG TENSORFLOW_IMAGE=nvcr.io/nvidia/tensorflow:20.06-tf2-py3
```
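(Since the Dockerfile declares this as an `ARG` with a default, the same override should also be possible at build time without editing the file, e.g. `docker build --build-arg TENSORFLOW_IMAGE=nvcr.io/nvidia/tensorflow:20.06-tf2-py3 .`.)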

But it looks like the TRTIS library doesn't exist in the TF2 container, because the build fails at this COPY step:

```
Step 39/124 : COPY --from=trtserver_tf      /usr/local/lib/tensorflow/libtensorflow_trtis.so.1      /opt/tritonserver/lib/tensorflow/
COPY failed: stat /var/lib/docker/aufs/mnt/92c0ea40c2eebfb2675728f6de9f6123c63d1dc1a93fb6455e28905720d95d53/usr/local/lib/tensorflow/libtensorflow_trtis.so.1: no such file or directory
```

Is it possible to build Triton so that it can serve models with TensorFlow 2?

Thanks

Answered here:
https://github.com/NVIDIA/triton-inference-server/issues/1804#issuecomment-659679254