Converting OpenPose's CMU TensorFlow graph to TensorRT

I use the TensorFlow implementation of OpenPose at https://github.com/ildoonet/tf-pose-estimation for human pose detection. I would like to run the model in TensorRT for faster processing. The CMU network https://github.com/ildoonet/tf-pose-estimation/blob/master/tf_pose/network_cmu.py is quite straightforward: it is composed of conv, max_pool, and concat layers, all of which are supported in TensorRT. The network input is image and the output is concat_stage7.
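For reference, here is a minimal sketch of the same pipeline through the Python API: frozen graph to UFF, then UFF to a TensorRT engine, since the precision and workspace settings chosen at build time are what determine any speedup over TensorFlow. This assumes the TensorRT 5.x Python API; the .pb path, the 368x432 input resolution, and the FP16/workspace settings are placeholders for illustration, while the node names are the ones shown in the converter log below.

# convert_and_build.py -- sketch only; paths and input size are placeholders
import uff
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# 1) Frozen TensorFlow graph -> UFF (equivalent to the convert-to-uff CLI run below)
uff.from_tensorflow_frozen_model(
    "cmu/graph_opt.pb",
    output_nodes=["Openpose/concat_stage7"],
    output_filename="cmu/cmu_openpose.uff")

# 2) UFF -> serialized TensorRT engine; FP16 and workspace size are set here
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 30           # 1 GiB
    builder.fp16_mode = True                       # only helps on GPUs with fast FP16
    parser.register_input("image", (3, 368, 432))  # CHW; resolution is an assumption
    parser.register_output("Openpose/concat_stage7")
    parser.parse("cmu/cmu_openpose.uff", network)
    engine = builder.build_cuda_engine(network)
    with open("cmu/cmu_openpose.engine", "wb") as f:
        f.write(engine.serialize())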
When I run convert-to-uff, the output is

NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "image"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "Openpose/concat_stage7"
op: "ConcatV2"
input: "Mconv7_stage6_L2/BiasAdd"
input: "Mconv7_stage6_L1/BiasAdd"
input: "Openpose/concat_stage7/axis"
attr {
  key: "N"
  value {
    i: 2
  }
}
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "Tidx"
  value {
    type: DT_INT32
  }
}
]
==========================================

Using output node Openpose/concat_stage7
Converting to UFF graph
No. nodes: 463
UFF Output written to cmu/cmu_openpose.uff

Does this conversion look correct?

But when I run inference, the speed is the same as the TensorFlow model.

Also, the TensorRT output is a flat array of shape (276000,).

But the TensorFlow model has a 4D output tensor:
self.tensor_output = self.graph.get_tensor_by_name('TfPoseEstimator/Openpose/concat_stage7:0')
(?, ?, ?, 57)
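For reference: with the usual TensorRT Python/PyCUDA inference workflow the host output comes back as a flat 1-D buffer, so it has to be reshaped to the engine's CHW binding shape and transposed before it can be compared with TensorFlow's NHWC (?, ?, ?, 57) tensor. A sketch, where engine and flat_output are assumed to come from that workflow:

import numpy as np

def reshape_trt_output(engine, flat_output, output_name="Openpose/concat_stage7"):
    # The PyCUDA host buffer is 1-D: reshape to the engine's CHW binding
    # shape, then transpose to HWC to line up with TensorFlow's NHWC layout
    # (batch dimension of 1 dropped).
    idx = engine.get_binding_index(output_name)
    chw = np.asarray(flat_output).reshape(tuple(engine.get_binding_shape(idx)))
    return chw.transpose(1, 2, 0)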

What could be wrong?

The CMU TensorFlow graph file used for the conversion is here: [Dropbox link - file deleted]

Can somebody help? I am rushing to meet my project deadline.
I used TensorRT 5.1.5 GA. What is the difference between GA and RC?
Thanks

Hey @edit_or, were you able to convert the model?

Hi, could you please share the ONNX model and the script so that we can assist you better?

Alongside that, you can try validating your model with the snippet below.

check_model.py

# usage: python check_model.py your_model.onnx
import sys
import onnx

filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
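For example, something along the lines of trtexec --onnx=model.onnx --saveEngine=model.engine for an ONNX model, or trtexec --uff=cmu_openpose.uff --uffInput=image,3,368,432 --output=Openpose/concat_stage7 for the UFF file above; the exact flags vary between TensorRT versions, so please check trtexec --help.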

Thanks!