Op type not registered 'CaseFoldUTF8' in binary

Description

I am in the development phase of running a deep learning model on Triton Inference Server.

Environment

TensorRT Version:
GPU Type: GPU 0: Tesla V100-SXM2-16GB

Nvidia Driver Version:
Triton server_version : 2.15.0
Triton Image: "21.10"
CUDA Version:
CUDNN Version:
Operating System + Version: Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
Python Version (if applicable): python3.8
TensorFlow Version (if applicable): 2.9.1
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

  1. Launch a container and add the following environment variables for the TensorFlow backend and a few custom ops (tokenizers):
export LD_LIBRARY_PATH=/opt/tritonserver/backends/tensorflow2:$LD_LIBRARY_PATH \
  && export LD_PRELOAD={{ .Values.triton.pre_load_binaries }}

where {{ .Values.triton.pre_load_binaries }} equals
"'/triton/lib/_sentencepiece_tokenizer.so /triton/lib/_normalize_ops.so /triton/lib/_regex_split_ops.so /triton/lib/_wordpiece_tokenizer.so'",

and /triton/lib/ is a volume that contains these shared libs.
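The LD_PRELOAD list above can be sanity-checked before exporting: each entry must resolve to a readable file inside the container at the moment the server process starts. A minimal sketch, assuming the /triton/lib paths above (`check_preload` is a hypothetical helper name, not part of Triton):

```shell
# Hypothetical helper: report whether each LD_PRELOAD candidate is a
# readable file, before the variable is exported.
check_preload() {
  rc=0
  for lib in "$@"; do
    if [ -r "$lib" ]; then
      echo "OK: $lib"
    else
      echo "MISSING: $lib"
      rc=1
    fi
  done
  return $rc
}

# Example invocation with the paths from the deployment above:
# check_preload /triton/lib/_sentencepiece_tokenizer.so \
#               /triton/lib/_normalize_ops.so \
#               /triton/lib/_regex_split_ops.so \
#               /triton/lib/_wordpiece_tokenizer.so
```

If any entry prints MISSING, the volume may not yet be mounted when the entrypoint runs, which would explain the preload errors below.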

  2. I have set the following permissions for the /triton/lib directory:
I have no name!@triton-569f794545-hlv2c:/opt/tritonserver$ ls -l /triton/lib/*
-rwsr-sr-x 1 7447 1337 4266352 Jul 12 18:24 /triton/lib/_normalize_ops.so
-rwsr-sr-x 1 7447 1337  786920 Jul 12 18:24 /triton/lib/_regex_split_ops.so
-rwsr-sr-x 1 7447 1337 4373688 Jul 12 18:24 /triton/lib/_sentencepiece_tokenizer.so
-rwsr-sr-x 1 7447 1337  108744 Jul 12 18:24 /triton/lib/_wordpiece_tokenizer.so
  3. When I start the container, I run into:
ERROR: ld.so: object '/triton/lib/_sentencepiece_tokenizer.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/triton/lib/_normalize_ops.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/triton/lib/_regex_split_ops.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/triton/lib/_wordpiece_tokenizer.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
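Since the Helm value is wrapped in both double and single quotes, it may also be worth confirming the value ld.so actually receives: LD_PRELOAD is a plain space- or colon-separated list of paths, and any quote characters that survive templating become part of each filename, which ld.so then cannot open. A small sketch of such a check (`preload_is_clean` is a hypothetical helper name):

```shell
# Hypothetical check: flag an LD_PRELOAD value that still contains quote
# characters, which the dynamic loader would treat as part of each path.
preload_is_clean() {
  case "$1" in
    *\'*|*\"*) echo "contains quote characters"; return 1 ;;
    *)         echo "clean"; return 0 ;;
  esac
}

# Run inside the container, after the entrypoint has exported the variable:
# preload_is_clean "$LD_PRELOAD"
```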
  4. Eventually, while issuing an inference call to the model, I see this in the triton-inference-server logs:
2022-07-12 20:25:18.577194: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:828] function_optimizer failed: Not found: Op type not registered 'CaseFoldUTF8' in binary running on triton-569f794545-hlv2c. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

(The same error is logged again for each subsequent inference request.)
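CaseFoldUTF8 is normally registered by the tensorflow-text normalization ops, so if none of the preloaded libraries were actually loaded (as the ld.so errors above suggest), this failure follows. As a hedged sanity check, one can look for which of the shared libraries even mentions the op name (`find_op` is a hypothetical helper; this only inspects string tables, it does not prove the op registers correctly):

```shell
# Hypothetical helper: print which of the given shared libraries mention
# the op name anywhere in their binary contents.
find_op() {
  op="$1"; shift
  for lib in "$@"; do
    # grep -a treats the binary as text; failures (missing files) are ignored
    if grep -qa "$op" "$lib" 2>/dev/null; then
      echo "$lib"
    fi
  done
}

# Expected usage against the mounted volume:
# find_op CaseFoldUTF8 /triton/lib/*.so
```

If no library is printed, the op is simply not shipped in any of the preloaded binaries and would be missing even with a working LD_PRELOAD.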

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

I have the same issue, as I use a model that needs tensorflow-text to be installed. I still cannot install it.

Hi,
UFF and the Caffe parser have been deprecated from TensorRT 7 onwards, so we request that you try the ONNX parser.
Please check the link below for details.

Thanks!

I am not using TensorRT; I am using a TensorFlow model.

Hi,

We recommend that you reach out to Issues · triton-inference-server/server · GitHub to get better help.

Thank you.