Building custom TensorFlow ops

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.1 and 6.1.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
520.61.05
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I’m trying to build a custom TensorFlow 2 op for deeplab2 (deeplab2/installation.md at main · google-research/deeplab2 · GitHub) to use inside the DeepStream Triton docker image. I’ve tried building the op with several versions of nvcr.io/nvidia/tensorflow:22.*-tf2-py3 to avoid having to install the TensorFlow Python package inside the Triton container. However, I keep running into the issue that DeepStream’s tritonserver appears to be compiled against protobuf 3.8.0, while all the TensorFlow docker images ship protobuf 3.9.x:

[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/stubs/common.cc:86] This program was compiled against version 3.8.0 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.9.2).  Contact the program author for an update.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "../../../build/src/utils/nvdsinferserver/pb/nvdsinferserver_common.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  This program was compiled against version 3.8.0 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.9.2).  Contact the program author for an update.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "../../../build/src/utils/nvdsinferserver/pb/nvdsinferserver_common.pb.cc".)
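For reference, a quick way to confirm the mismatch is to print the protobuf runtime version inside each container before copying the op across. This is just a diagnostic sketch, not part of the build itself:

```python
# Quick check of the protobuf runtime version visible to Python in a container.
# Run it in both the TF build image and the DeepStream Triton image to confirm
# the mismatch (3.9.2 vs. 3.8.0 reported in the error above).
from google.protobuf import __version__ as pb_version

print(pb_version)
```

For the native side, `ldd your_op.so | grep protobuf` shows whether the compiled op links libprotobuf dynamically (if it is statically linked, nothing will show and the version is baked in at build time).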

I’ve tried both DeepStream Triton 6.1 and 6.1.1. Any suggestions of which container I should be using to build the TF2 op?
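For context, a TF2 custom op like this is normally compiled against the headers bundled with the TensorFlow Python package, along the lines of TensorFlow's standard custom-op build. This is a sketch with placeholder file names (not the actual deeplab2 sources); the key point is that the compile/link flags come from whichever TF build is installed, so the op inherits that build's protobuf headers:

```shell
# Standard TensorFlow custom-op build (sketch; custom_op.cc/.so are placeholders).
# tf.sysconfig provides the include and link flags of the installed TF package,
# so the resulting .so is tied to that TF build's bundled protobuf version.
TF_CFLAGS=$(python3 -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))')
TF_LFLAGS=$(python3 -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))')
g++ -std=c++14 -shared custom_op.cc -o custom_op.so -fPIC ${TF_CFLAGS} ${TF_LFLAGS} -O2
```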

Triton supports a TensorFlow 2 inference backend. Are you customizing the tensorflow2 backend? Here are some references: GitHub - triton-inference-server/backend: Common source, scripts and utilities for creating Triton backends.

Hi @fanzh, yes, it supports TF2 and inference runs fine. We’re just trying to find ways to speed up the TF2 inference by compiling this custom op.

Which step or command causes this error?
Did you try to build in the DeepStream Triton docker directly?

No, I’m using the triton server from the DeepStream Triton container. The error happens when tritonserver tries to load the custom op while loading the model, following Triton’s documentation.
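For anyone hitting the same wall: Triton's custom-operations documentation has the op library preloaded when the server starts so the TensorFlow backend can resolve its symbols at model load time. A sketch, with placeholder paths:

```shell
# Loading a TF custom op into tritonserver (per Triton's custom_operations doc).
# /opt/ops/custom_op.so and /models are placeholders for the compiled op and
# the model repository. The protobuf version error above fires at this point,
# when the preloaded .so and tritonserver disagree on the protobuf runtime.
LD_PRELOAD=/opt/ops/custom_op.so tritonserver --model-repository=/models
```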

  1. Do you mean that you built the tensorflow2 op successfully in the TensorFlow container, but tritonserver failed to start in the DeepStream Triton docker after you copied the tensorflow2 op into it?
  2. Sorry for the typo; did you try to build in the DeepStream Triton container directly? Can the tensorflow2 op build successfully using protobuf 3.8.0?

#1, yes, because they use incompatible protobuf versions.
#2, I have not tried to build the DeepStream Triton container myself; I use the one provided via NGC. I also have not yet tried to build TensorFlow 2 myself, as it’s a fairly involved process.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There is another solution: you can build tritonserver yourself. Here is the link: server/build.md at main · triton-inference-server/server · GitHub
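The build described in that link is driven by the repo's `build.py` script; roughly along these lines (a sketch only; the exact flags and the branch to check out depend on your release, so verify against build.md):

```shell
# Building tritonserver from source with the TF2 backend (sketch based on the
# linked build.md; flag names here are illustrative assumptions, check build.md
# for the set matching your release branch).
git clone https://github.com/triton-inference-server/server.git
cd server
./build.py --enable-gpu --enable-logging \
           --endpoint=grpc --endpoint=http \
           --backend=tensorflow2
```

Building it yourself lets you pin the protobuf version used at compile time, which is what resolves the 3.8.0 vs. 3.9.2 mismatch described above.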

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.