End-to-end TensorFlow MNIST example in DeepStream 4.0.

I am trying to integrate the UFF model that is obtained by running the end_to_end_tensorflow_mnist example from TensorRT 5.1 in DeepStream 4.0 on a Jetson TX2.

I was able to obtain the UFF model file lenet5.uff. The output of running convert-to-uff was as follows:

Loading models/lenet5.pb
NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "input_1"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: 28
      }
      dim {
        size: 28
      }
      dim {
        size: 1
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "dense_1/Softmax"
op: "Softmax"
input: "dense_1/BiasAdd"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
==========================================

Using output node dense_1/Softmax
Converting to UFF graph
DEBUG: convert reshape to flatten node
No. nodes: 13
UFF Output written to models/lenet5.uff

From this, I deduce the input dimensions, the input blob name (input_1), and the output blob name (dense_1/Softmax). I have then created two configuration files. First, config_infer_primary_lenet5.txt, based on config_infer_primary_nano.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
batch-size=8
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
uff-file=../../models/lenet5.uff
input-dims=-1;28;28;1
uff-input-blob-name=input_1
output-blob-names=dense_1/Softmax

[class-attrs-all]
threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Second, I have created source12_lenet5_tx2.txt, based on source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt, in which I have modified only the primary-GIE section, as follows:

[primary-gie]
enable=1
gpu-id=0
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_lenet5.txt

However, when I try to run this app using deepstream-app -c source12_lenet5_tx2.txt, I obtain the following error messages:

Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:00:01.363415824 25430     0x1b70ecd0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.372738769 25430     0x1b70ecd0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): Parameter check failed at: ../builder/Network.cpp::addInput::465, condition: isValidDims(dims)
0:00:01.372870480 25430     0x1b70ecd0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): UFFParser: Failed to parseInput for node input_1
0:00:01.373262252 25430     0x1b70ecd0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): UffParser: Parser error: input_1: Failed to parse node - Invalid Tensor found at node input_1
0:00:01.373576713 25430     0x1b70ecd0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse UFF file: incorrect file or incorrect input/output blob names
0:00:01.373644776 25430     0x1b70ecd0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.373718919 25430     0x1b70ecd0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:01.373758535 25430     0x1b70ecd0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/config_infer_primary_lenet5.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/config_infer_primary_lenet5.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed

It seems to be running into dimensionality issues. However, I don’t know how to proceed from here. Any assistance would be appreciated.

The output of deepstream-app --version-all is as follows:

deepstream-app version 4.0
DeepStreamSDK 4.0
CUDA Driver Version: 10.0
CUDA Runtime Version: 10.0
TensorRT Version: 5.1
cuDNN Version: 7.5
libNVWarp360 Version: 2.0.0d5

Instead of input-dims, the config should use uff-input-dims, as per the plugin manual:

NVIDIA Metropolis Documentation.

The format of this parameter is as follows:

uff-input-dims=CHANNEL;HEIGHT;WIDTH;INPUT-ORDER

Possible values for INPUT-ORDER are:
0: NCHW
1: NHWC
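Applied to the lenet5.uff above, the fix would look roughly like this (a sketch: MNIST input is grayscale, so the channel count is 1, and I am assuming INPUT-ORDER=0 (NCHW) as in the shipped sample configs; the leading -1 batch dimension is not part of this parameter):

```
# In config_infer_primary_lenet5.txt, replace the input-dims line with:
uff-file=../../models/lenet5.uff
uff-input-dims=1;28;28;0
uff-input-blob-name=input_1
output-blob-names=dense_1/Softmax
```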

Hi pdboef,
I also encountered the same problem. I'd like to know: did you use uff-input-dims=-1;28;28;0 or 3;28;28;0?
Thanks.
(Hi again, I have now solved my problem.)
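For what it's worth, neither value should be needed here: LeNet-5 on MNIST takes grayscale input, so the channel count is 1 (not 3), and the leading -1 in the placeholder shape is the batch dimension, which uff-input-dims does not include. A minimal stdlib-only sketch of the NHWC-shape-to-config mapping (the helper name is my own, not part of DeepStream):

```python
def uff_input_dims_from_nhwc(shape, input_order=0):
    """Convert a TensorFlow NHWC placeholder shape, e.g. [-1, 28, 28, 1],
    to the CHANNEL;HEIGHT;WIDTH;INPUT-ORDER string DeepStream expects."""
    batch, height, width, channels = shape  # drop the batch dim (often -1)
    return f"{channels};{height};{width};{input_order}"

# The lenet5.uff placeholder deduced by convert-to-uff was [-1, 28, 28, 1]:
print(uff_input_dims_from_nhwc([-1, 28, 28, 1]))  # prints 1;28;28;0
```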

Hi sdtzmtt,
I also encountered this problem and would like to know how you solved it. Can you please tell me? Thank you!

How do I run a .pb model file using DeepStream 5.0?