Deepstream-app segmentation fault with Triton TensorFlow backend

Greetings. I have a really strange issue with my Keras model converted to TensorFlow SavedModel format. The model runs correctly with TensorFlow Serving and Triton Inference Server. I've pulled the DeepStream Docker image (nvcr.io/nvidia/deepstream:5.1-21.02-triton) and tried to load the model through deepstream-app, but it crashes after loading libcudnn:

I0811 12:19:37.619488 12589 tensorflow.cc:2099] model mtcnn, instance mtcnn, executing 1 requests
I0811 12:19:37.619527 12589 tensorflow.cc:1388] TRITONBACKEND_ModelExecute: Running mtcnn with 1 requests
I0811 12:19:37.624165 12589 tensorflow.cc:1616] TRITONBACKEND_ModelExecute: input 'input' is GPU tensor: false
2021-08-11 12:19:38.531670: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-08-11 12:19:39.156309: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
I0811 12:19:41.841704 12589 infer_response.cc:165] add response output: output: o_net_layer_1, type: FP32, shape: [1,5]
Segmentation fault (core dumped)

The same model loaded with tritonserver --model-repository=models in the same Docker container works correctly:

I0811 12:22:12.574130 12908 grpc_server.cc:3979] Started GRPCInferenceService at 0.0.0.0:8001
I0811 12:22:12.574764 12908 http_server.cc:2717] Started HTTPService at 0.0.0.0:8000
I0811 12:22:12.617878 12908 http_server.cc:2736] Started Metrics Service at 0.0.0.0:8002

I started removing layers and operations to locate the operation that triggers this issue, and I found it:

bb = tf.where(imap >= 0.6)

I tried changing this operation to tf.boolean_mask, but the result is the same. Then I changed the condition to imap <= 0.6, and surprisingly it worked! Could you please help me understand this strange behavior and how this issue can be resolved?
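
For reference, a minimal sketch of what this call returns (the imap values below are toy numbers, not from the real model):

import tensorflow as tf

# Toy stand-in for the real probability map; values are illustrative only.
imap = tf.constant([[0.1, 0.7],
                    [0.9, 0.2]])

tf.where(imap >= 0.6)
#<tf.Tensor: shape=(2, 2), dtype=int64, numpy=
#array([[0, 1],
#       [1, 0]])>

# When nothing passes the threshold, the result is an empty tensor
# with a zero-sized leading dimension rather than an error:
tf.where(imap >= 1.0)
#<tf.Tensor: shape=(0, 2), dtype=int64, numpy=array([], shape=(0, 2), dtype=int64)>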

• Hardware Platform (Jetson / GPU) NVIDIA Tesla T4
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) -
• TensorRT Version -
• NVIDIA GPU Driver Version (valid for GPU only) 460.73.01
• Issue Type (questions, new requirements, bugs) question, bug

" in same Docker container", do you mean [nvcr.io/nvidia/deepstream:5.1-21.02-triton](http://nvcr.io/nvidia/deepstream:5.1-21.02-triton ?

According to the tf.where documentation (TensorFlow v2.14.0), you should be using the "If x and y are not provided (both are None)" case. For that case, is "tf.where(imap >= 0.6)" or "tf.where(imap < 0.6)" right? Shouldn't it be bb = tf.where([imap >= 0.6])?
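
For reference, the two documented call forms behave quite differently (a minimal sketch; cond is a toy tensor, not from the model):

import tensorflow as tf

cond = tf.constant([[True, False],
                    [False, True]])

# Without x and y, tf.where returns the coordinates of the True elements.
tf.where(cond)
#<tf.Tensor: shape=(2, 2), dtype=int64, numpy=
#array([[0, 0],
#       [1, 1]])>

# With x and y provided, tf.where instead selects elementwise between them.
tf.where(cond, tf.fill([2, 2], 1.0), tf.fill([2, 2], 0.0))
#<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
#array([[1., 0.],
#       [0., 1.]])>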

Yes, I used the tritonserver binary in that container.

It just adds an extra zero-valued column to the matrix. For example:

tf.where(imap >= 0.6)
#<tf.Tensor: shape=(144, 2), dtype=int64, numpy=
#array([[ 13,  21],
#       [ 13,  22],
#       [ 14,  21],
#       [ 14,  22],
#       [ 28, 186],
#       ...
######################################################
tf.where([imap >= 0.6])
#<tf.Tensor: shape=(144, 3), dtype=int64, numpy=
#array([[  0,  13,  21],
#       [  0,  13,  22],
#       [  0,  14,  21],
#       [  0,  14,  22],
#       [  0,  28, 186],
#       ...

OK, I've figured it out. It seems the segmentation fault appears not with specific operations, but whenever a layer returns a tensor whose shape contains a zero-sized dimension (e.g. shape [0, 5]), so I've added handling for these cases and it's working now.
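
For anyone who hits the same crash, here is a minimal sketch of the kind of guard I mean (the function and names are illustrative, not the actual MTCNN code; whether a dummy row is acceptable depends on how the indices are consumed downstream):

import tensorflow as tf

def threshold_boxes(imap, threshold=0.6):
    # tf.where with a lone condition returns shape [num_true, rank];
    # when nothing passes the threshold this is a (0, 2) tensor, and a
    # zero-sized dimension is what crashed the DeepStream Triton backend.
    bb = tf.where(imap >= threshold)

    # Guard the empty case: substitute a single dummy row so downstream
    # layers never receive a tensor with a zero-sized dimension.
    return tf.cond(
        tf.equal(tf.shape(bb)[0], 0),
        lambda: tf.zeros([1, 2], dtype=bb.dtype),
        lambda: bb,
    )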