UffParser: Validator error: dense2/unstack: Unsupported operation _Unpack

I have this trained network and I want to use it in deepstream 4.0:

Model: "model_20"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
the_input (InputLayer)          (None, 136, 68, 1)   0
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 136, 68, 16)  160         the_input[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 136, 68, 16)  2320        conv1[0][0]
__________________________________________________________________________________________________
max1 (MaxPooling2D)             (None, 68, 34, 16)   0           conv2d_1[0][0]
__________________________________________________________________________________________________
conv2 (Conv2D)                  (None, 68, 34, 16)   2320        max1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 68, 34, 16)   2320        conv2[0][0]
__________________________________________________________________________________________________
max2 (MaxPooling2D)             (None, 34, 17, 16)   0           conv2d_2[0][0]
__________________________________________________________________________________________________
reshape (Reshape)               (None, 34, 272)      0           max2[0][0]
__________________________________________________________________________________________________
dense1 (Dense)                  (None, 34, 32)       8736        reshape[0][0]
__________________________________________________________________________________________________
gru1 (CuDNNGRU)                 (None, 34, 512)      838656      dense1[0][0]
__________________________________________________________________________________________________
gru1_b (CuDNNGRU)               (None, 34, 512)      838656      dense1[0][0]
__________________________________________________________________________________________________
add_1 (Add)                     (None, 34, 512)      0           gru1[0][0]
                                                                 gru1_b[0][0]
__________________________________________________________________________________________________
gru2 (CuDNNGRU)                 (None, 34, 512)      1575936     add_1[0][0]
__________________________________________________________________________________________________
gru2_b (CuDNNGRU)               (None, 34, 512)      1575936     add_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 34, 1024)     0           gru2[0][0]
                                                                 gru2_b[0][0]
__________________________________________________________________________________________________
dense2 (Dense)                  (None, 34, 36)       36900       concatenate_1[0][0]
__________________________________________________________________________________________________
softmax (Activation)            (None, 34, 36)       0           dense2[0][0]
==================================================================================================

I converted it from a TensorFlow frozen model to UFF format:

import uff
uff.from_tensorflow_frozen_model("lpr_it0.pb", output_filename="lpr_it0.uff")

Now I want to load it into [primary-gie] with the following properties:

[property]
gpu-id=0
net-scale-factor=1
model-color-format=0
uff-file=lpr2.uff
uff-input-blob-name=the_input
output-blob-names=softmax/truediv
labelfile-path=labels-lpr.txt
network-mode=2
num-detected-classes=1
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1

I get this error during loading:

Using winsys: x11 
Creating LL OSD context new
0:00:00.805213106 13084     0x1d472840 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.195908042 13084     0x1d472840 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): UffParser: Validator error: dense2/unstack: Unsupported operation _Unpack
0:00:01.205150416 13084     0x1d472840 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse UFF file: incorrect file or incorrect input/output blob names
0:00:01.205257448 13084     0x1d472840 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.205336147 13084     0x1d472840 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:01.205369585 13084     0x1d472840 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /opt/nvidia/deepstream/deepstream-4.0/sources/lpr/config_infer_lpr_gru.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /opt/nvidia/deepstream/deepstream-4.0/sources/lpr/config_infer_lpr_gru.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed

How can I fix it? Is there a straightforward approach for using a .pb file in the DeepStream SDK?

Hi,

You will need to convert the model into a TensorRT-supported format before feeding it into DeepStream.
This requires you to convert the .pb file into .uff first.

According to the error log, there is an unsupported operation inside your model:

UffParser: Validator error: dense2/unstack: Unsupported operation _Unpack

The _Unpack operation generated by the dense layer is not in the supported list of our UFF parser, which leads to this error.
You can find the detailed support matrix in our documentation:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#supported-ops
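One way to find every offending node before converting is to diff the graph's op types against the support matrix. A minimal sketch of that check (the SUPPORTED set below is only a small illustrative subset of the real matrix, and in practice the op-type list would be read from the .pb via TensorFlow's GraphDef rather than hard-coded):

```python
# Flag graph op types that the UFF parser cannot handle.
# SUPPORTED is an illustrative subset of the TensorRT support matrix,
# not the full list from the documentation.
SUPPORTED = {
    "Placeholder", "Conv2D", "MaxPool", "BiasAdd", "Relu",
    "MatMul", "Reshape", "Softmax", "ConcatV2", "Add", "Identity",
}

def find_unsupported(op_types):
    """Return the set of op types not covered by the UFF parser."""
    return set(op_types) - SUPPORTED

# Op types as they might appear in the model above; "Unpack" is the
# op behind the "Unsupported operation _Unpack" validator error.
ops = ["Placeholder", "Conv2D", "MaxPool", "MatMul", "Unpack", "Softmax"]
print(find_unsupported(ops))  # -> {'Unpack'}
```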

Thanks.

"This requires you to convert the .pb file into .uff first."
As you can see from my message, I have already converted the .pb to .uff with the TensorRT UFF utility.
This .pb worked fine with TensorFlow on the Jetson Nano. How should I convert it so that it can run from the DeepStream SDK?

Hi,

There are unsupported operations inside your model.
DeepStream does not support the TensorFlow framework, so you cannot use a .pb file as input directly.

Is it possible to update your model to use only the supported operations listed here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#supported-ops
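For context on why the error points at dense2/unstack: in TF 1.x, a Keras Dense applied to a 3-D (batch, time, features) tensor is often lowered into per-timestep unstack/matmul/stack nodes, and that Unpack node is what the validator rejects. One possible rework (not the only one) is to replace the final Dense with a kernel-size-1 Conv1D, which computes the same per-timestep affine map with convolution ops the parser handles. A numpy sketch of that equivalence, using the shapes from the model summary above (34 timesteps, 1024 inputs, 36 outputs) but random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

T, F_in, F_out = 34, 1024, 36          # timesteps, input dim, output dim
x = rng.normal(size=(T, F_in))         # one sequence (batch dim dropped for clarity)
W = rng.normal(size=(F_in, F_out))     # Dense kernel
b = rng.normal(size=(F_out,))          # Dense bias

# Dense on a 3-D tensor: y[t] = x[t] @ W + b for every timestep t.
dense_out = x @ W + b

def conv1d(x, kernel, bias):
    """Minimal 'valid' 1-D convolution over the time axis.
    x: (T, F_in), kernel: (K, F_in, F_out), bias: (F_out,)."""
    K = kernel.shape[0]
    steps = x.shape[0] - K + 1
    return np.stack([
        np.einsum("kf,kfo->o", x[t:t + K], kernel) + bias
        for t in range(steps)
    ])

# Kernel size 1 with the Dense weights: each output timestep sees
# exactly one input timestep, so it is the same affine map.
conv_out = conv1d(x, W[None, :, :], b)

assert np.allclose(dense_out, conv_out)
print("Conv1D(k=1) matches per-timestep Dense")
```

Swapping the layer this way requires retraining nothing if you copy the Dense weights into the Conv1D kernel, but the exact node names in the re-exported graph will change, so output-blob-names in the config must be updated to match.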

If not, an alternative is to implement the operation directly with the TensorRT plugin API.
Here is a sample that demonstrates this use case:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/

Thanks.