MXNet to ONNX to TensorRT, how to interpret trt output to match mxnet output

I have a model in MXNet that I exported to ONNX and then imported from ONNX into TensorRT.

I’m using onnx-tensorrt to run the inference.

I get an output after calling:

trt_outputs = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
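For context, `do_inference` from TensorRT’s `common.py` sample returns flat 1-D host buffers, so before comparing with MXNet I reshape them back to the binding shapes. A minimal sketch (the output shapes below are placeholders, not my actual model’s; in practice they come from `engine.get_binding_shape`):

```python
import numpy as np

# Hypothetical output shapes -- replace with the model's actual binding shapes,
# e.g. engine.get_binding_shape(i) for each output binding.
output_shapes = [(1, 4, 20, 20), (1, 2, 20, 20)]

def reshape_trt_outputs(trt_outputs, shapes):
    """do_inference returns flat float32 arrays; restore the NCHW shapes."""
    return [np.asarray(out).reshape(shape) for out, shape in zip(trt_outputs, shapes)]

# Dummy flat buffers standing in for trt_outputs:
flat = [np.zeros(int(np.prod(s)), dtype=np.float32) for s in output_shapes]
reshaped = reshape_trt_outputs(flat, output_shapes)
print([r.shape for r in reshaped])  # [(1, 4, 20, 20), (1, 2, 20, 20)]
```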

I also get an output when I do a forward pass in MXNet (from that output I extract the bbox values for the face).

Question: How can I convert TensorRT’s inference output to match MXNet’s inference output, so I can classify the faces with the bboxes?

Also, maybe I’m not looking in the right place and I should ignore MXNet’s output and interpret ONNX’s output instead? (I verified that ONNX produces the same output.)

Can you share MXNet’s inference output and TRT’s inference output?

Also, what version of TRT are you using?


I’ve replied with the post above. Hopefully you could help get more insights :)
[Debug screenshot: im_tensor vs im_tensor_transpose]

I was able to get an inference from TRT now, but it doesn’t match MXNet’s inference.

The number of bboxes I get on the same 30 images is 1361 with MXNet and 841 with TRT.

Also, the difference between the outputs is very small; I think there’s still a simple pre-processing step I need to apply to the input in order to get the same output as MXNet.
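In case it helps pinpoint the mismatch, this is the kind of pre-processing difference I suspect: MXNet models typically expect RGB, NCHW, float32 input, while an OpenCV-loaded image is BGR, HWC, uint8. A sketch of the conversion (the image size here is arbitrary, and any mean/std normalization from the actual training pipeline would still need to be added):

```python
import numpy as np

def preprocess(img_bgr_hwc_uint8):
    """Convert an OpenCV-style BGR/HWC/uint8 image to the RGB/NCHW/float32
    tensor an MXNet model typically expects. No mean/std normalization is
    applied here -- that depends on the original training pipeline."""
    img = img_bgr_hwc_uint8[:, :, ::-1].astype(np.float32)  # BGR -> RGB
    img = np.transpose(img, (2, 0, 1))                      # HWC -> CHW
    return img[np.newaxis, ...]                             # add batch dim -> NCHW

dummy = np.zeros((480, 640, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 480, 640)
```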

Another debug screenshot: [data of mxnet vs data_trt]

[Debug screenshot: MXNet net output vs TRT inference output]

To help us debug, can you share a small repro containing the MXNet model, the code that converts it to TRT, and the inference code that shows the difference in results?

Hi again NVES, thanks for the reply.
I’d like the model & code to be kept private; do you have an FTP or any other private hosting I could upload the code to, instead of posting it on the board here?

You can DM me a Google Drive or Dropbox share link.


Highly appreciated NVES,
I’ve sent you a DM with the Google Drive link.

what version of TRT are you using?

Hi, thanks for the reply!
I’m using the nvidia-docker 19.03-py3 container (in my PyCharm Pro, so I can debug), which runs this TRT version:

Updated you with the relevant files, in DM :)