DeepStream ONNX inference: no output

This output name is expected; it is set by the export code in tao_tensorflow1_backend/nvidia_tao_tf1/core/export/_onnx.py (NVIDIA/tao_tensorflow1_backend on GitHub, commit 2ec95cbbe0d74d6a180ea6e989f64d2d97d97712).
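If you want to confirm the input and output names the exporter wrote into the graph, you can read them with the onnx Python package. A minimal sketch, assuming the exported file is the forum_303165.onnx used in the trtexec command below:

import onnx

# Load the exported model and print the tensor names recorded in the
# graph's I/O lists; these are the names DeepStream must refer to.
model = onnx.load("forum_303165.onnx")
print("inputs :", [t.name for t in model.graph.input])
print("outputs:", [t.name for t in model.graph.output])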

To generate a TensorRT engine, you can run trtexec as below. For example:

$ trtexec --onnx=forum_303165.onnx --maxShapes=input_1:1x3x224x224 --minShapes=input_1:1x3x224x224 --optShapes=input_1:1x3x224x224 --saveEngine=forum_303165.engine --workspace=20480

You can inspect the engine with polygraphy. For example:

polygraphy inspect model forum_303165.engine
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[I] Loading bytes from /localhome/local-morganh/bak_x11-0002/tf1_forum_298861/forum_303165.engine
[I] ==== TensorRT Engine ====
    Name: Unnamed Network 0 | Explicit Batch Engine

    ---- 1 Engine Input(s) ----
    {input_1 [dtype=float32, shape=(1, 3, 224, 224)]}

    ---- 1 Engine Output(s) ----
    {predictions [dtype=float32, shape=(1, 2)]}

    ---- Memory ----
    Device Memory: 136225280 bytes

    ---- 1 Profile(s) (2 Tensor(s) Each) ----
    - Profile: 0
        Tensor: input_1              (Input), Index: 0 | Shapes: min=(1, 3, 224, 224), opt=(1, 3, 224, 224), max=(1, 3, 224, 224)
        Tensor: predictions         (Output), Index: 1 | Shape: (1, 2)

    ---- 30 Layer(s) ----
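To confirm the engine itself produces output outside of DeepStream, you can run one inference through polygraphy's Python API. A minimal sketch with random input (the file name matches the trtexec command above):

import numpy as np
from polygraphy.backend.common import BytesFromPath
from polygraphy.backend.trt import EngineFromBytes, TrtRunner

# Deserialize the saved engine and run it once on a dummy 1x3x224x224 input.
load_engine = EngineFromBytes(BytesFromPath("forum_303165.engine"))
with TrtRunner(load_engine) as runner:
    feed = {"input_1": np.random.rand(1, 3, 224, 224).astype(np.float32)}
    outputs = runner.infer(feed)
    print(outputs["predictions"])  # expected shape: (1, 2)

If this prints a (1, 2) array, the engine is working and the "no output" problem is more likely on the DeepStream configuration side (e.g., the output tensor name in the config file).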

For the accuracy issue, it should be the same as the one discussed in the topic "Inference with tensorrt engine file has different results compared with trained hdf5 model". As a workaround, you can train the model in the nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5 container, export again, and rerun.
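To check where the accuracy drifts, you can compare ONNX Runtime against TensorRT on the same input with polygraphy's Comparator. A minimal sketch that builds the engine in-process from the same ONNX file (requires onnxruntime to be installed):

from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx
from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath, TrtRunner
from polygraphy.comparator import Comparator

# Run both backends on the same generated input and compare the outputs.
runners = [
    OnnxrtRunner(SessionFromOnnx("forum_303165.onnx")),
    TrtRunner(EngineFromNetwork(NetworkFromOnnxPath("forum_303165.onnx"))),
]
results = Comparator.run(runners)
print(Comparator.compare_accuracy(results))

The equivalent CLI one-liner is: polygraphy run forum_303165.onnx --onnxrt --trt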