Can't load InsightFace ONNX model on TX2 using TensorRT 4.1.3

Hi,

I am using InsightFace (https://github.com/deepinsight/insightface) on Windows with TensorFlow, and now I want to run it on a TX2.

I converted the Face Detect model and the Gender-Age model with the TensorFlow-to-ONNX converter from https://github.com/onnx/tensorflow-onnx, using the following commands:

python -m tf2onnx.convert --graphdef ./saved_model.pb --output ./frozen.onnx --fold_const --inputs pnet/input:0,rnet/input:0,onet/input:0 --outputs pnet/conv4-2/BiasAdd:0,pnet/prob1:0,rnet/conv5-2/conv5-2:0,rnet/prob1:0,onet/conv6-3/conv6-3:0,onet/prob1:0
python -m tf2onnx.convert --graphdef ./AGE_GENDER.pb --output ./frozen.onnx --fold_const --inputs data:0 --outputs output/BiasAdd:0

Then I load the ONNX models on my TX2 with TensorRT and get the following error for the Face Detect model:

[2019-09-11 02:41:05   ERROR] Parameter check failed at: ../builder/Network.cpp::addInput::364, condition: isValidDims(dims)
rtspframebuffer: onnx/converterToTRT.h:211: nvinfer1::ITensor* nvonnxparser::Converter::convert_input(std::string): Assertion `input_tensor' failed.

and the following error for the Gender-Age model:

[2019-09-11 02:41:47   ERROR] Parameter check failed at: ../builder/Network.cpp::addScale::113, condition: scale.count == 0 || scale.count == weightCount
rtspframebuffer: onnx/converterToTRT.h:156: nvonnxparser::TRT_LayerOrWeights nvonnxparser::Converter::convert_node(const onnx::NodeProto*): Assertion `layer' failed.
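For what it's worth, the first error (`isValidDims`) usually means an input tensor in the ONNX graph has an unknown or dynamic dimension, which the TensorRT 4.x builder rejects. A minimal sketch of that check in plain Python (a hypothetical helper, not part of TensorRT; you could feed it shapes read from the model with the `onnx` package):

```python
# Sketch: mirror TensorRT's "isValidDims" idea so you can pre-validate input
# shapes pulled from your ONNX model before handing it to the parser.
# This helper is hypothetical and only illustrates the dimension rule.
def is_valid_dims(dims):
    """Return True if every dimension is a concrete positive integer."""
    return len(dims) > 0 and all(isinstance(d, int) and d > 0 for d in dims)

print(is_valid_dims([3, 112, 112]))   # fixed shape -> True
print(is_valid_dims([-1, 112, 112]))  # dynamic batch dim -> False
```

If a shape fails this check, re-exporting the model with a fixed batch size (or patching the input shape in the ONNX graph) is usually the fix.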

Does anyone know how to solve it?

Thanks.

Hi,

The simplest way is to use TF-TRT:
https://github.com/NVIDIA-AI-IOT/tf_trt_models
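A minimal TF-TRT sketch, assuming TensorFlow 1.x on JetPack (where TF-TRT lives in `tensorflow.contrib.tensorrt`); the path and node names below are placeholders, not from this thread:

```python
# Sketch: optimize a frozen TensorFlow graph with TF-TRT (TensorFlow 1.x API).
def optimize_with_tftrt(frozen_pb_path, output_node_names):
    """Return a TensorRT-optimized GraphDef (requires TF 1.x with TF-TRT)."""
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt  # contrib module in TF 1.x

    # Load the frozen graph from disk.
    with tf.gfile.GFile(frozen_pb_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Replace supported subgraphs with TensorRT engine nodes.
    return trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=output_node_names,        # node names without the ":0" suffix
        max_batch_size=1,
        max_workspace_size_bytes=1 << 25,
        precision_mode="FP16")            # TX2 has fast FP16
```

Unsupported ops simply stay in TensorFlow, which is why this path tends to be the easiest to get running.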

If you want to run the model with pure TensorRT, it’s recommended to convert the model into UFF rather than ONNX:
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification
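In case it helps, the UFF path can be sketched as follows (placeholder paths and node names; `convert-to-uff` and `trtexec` ship with TensorRT on JetPack):

```shell
# Sketch: frozen TensorFlow graph -> UFF -> TensorRT engine smoke test.
# File paths and node names are placeholders, not from this thread.
convert-to-uff frozen.pb -o model.uff -O output/BiasAdd
trtexec --uff=model.uff --uffInput=data,3,112,112 --output=output/BiasAdd
```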

Thanks.

Hi,

Thanks @AastaLLL, I'll give it a try.

Thanks.

Hello Jack,
Did it work out for you?

Thanks …

My workflow is now TensorFlow -> ONNX -> TensorRT, and it works!
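For anyone following along, that pipeline can be sketched end to end (placeholder paths and node names; `--onnx` support in `trtexec` is assumed for the installed TensorRT version):

```shell
# TensorFlow -> ONNX via tf2onnx, then a TensorRT engine smoke test.
# Paths and node names are placeholders, not from this thread.
python -m tf2onnx.convert --graphdef saved_model.pb --output frozen.onnx \
    --fold_const --inputs data:0 --outputs output/BiasAdd:0
trtexec --onnx=frozen.onnx
```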


Hello,

I also tried InsightFace (https://github.com/deepinsight/insightface) on Windows and Ubuntu PCs, and both worked well!

Now I want to use it on a TX2, and I installed TensorFlow from https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html. Do I need to convert the model into UFF, and is using TensorRT mandatory on the Jetson TX2?

I'm working on this now, but I wonder if my approach is fine?

Many thanks

Cool! So you managed to make InsightFace work well on the Jetson TX2 with the TensorFlow -> ONNX -> TensorRT change?

Yes.