I also have the output of a forward pass in MXNet (from that output I extract the bbox values for the face).
Question: How can I convert TensorRT's inference output to match MXNet's inference output, so I can classify the faces with the bboxes?
Or maybe I'm looking in the wrong place, and I should ignore MXNet's output and interpret ONNX's output instead? (I verified that ONNX produces the same output as MXNet.)
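In case it helps anyone debugging a similar mismatch: a small numpy-only diff of the raw outputs from each framework can pin down where the drift starts. This is just a sketch; the tolerances are a guess, and the dummy arrays stand in for whatever score/bbox tensors you dump from MXNet and TRT:

```python
import numpy as np

def compare_outputs(a, b, rtol=1e-3, atol=1e-4):
    """Report the max absolute difference between two raw network outputs,
    and whether they agree within tolerance (tolerances are a guess)."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    max_diff = float(np.max(np.abs(a - b)))
    return max_diff, bool(np.allclose(a, b, rtol=rtol, atol=atol))

# Demo with dummy tensors standing in for the MXNet vs TRT score maps
mxnet_scores = np.linspace(0.0, 1.0, 100, dtype=np.float32)
trt_scores = mxnet_scores + 1e-5  # tiny numerical drift
max_diff, ok = compare_outputs(mxnet_scores, trt_scores)
print(max_diff, ok)
```

A large `max_diff` on the raw score maps usually means a pre-processing or layout mismatch; a tiny one points at post-processing (thresholds, NMS) instead.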
I've replied in the post above. Hopefully you can help me get more insight :) DEBUG PICTURE: im_tensor vs im_tensor_transpose https://imgur.com/5scbLCD
I'm now able to run inference with TRT, but the results don't match MXNet's.
On the same 30 images I get 1361 bboxes with MXNet and 841 with TRT.
The difference in the raw output values is very small, though, so I think there's still a simple pre-processing step I need to apply to the input in order to reproduce MXNet's output.
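Since the debug picture above compares im_tensor vs im_tensor_transpose, the mismatch may just be input layout. Here's a minimal sketch of the kind of pre-processing a typical MXNet face detector expects (BGR→RGB, mean subtraction, HWC→CHW, batch dim); the mean/std values are placeholders, check your model's training config:

```python
import numpy as np

def preprocess(img_hwc_bgr, mean=(0.0, 0.0, 0.0), std=1.0):
    """Hypothetical pre-processing for an MXNet-style detector.
    mean/std are placeholders; verify against the actual training pipeline."""
    img = img_hwc_bgr[:, :, ::-1].astype(np.float32)   # BGR -> RGB
    img = (img - np.array(mean, dtype=np.float32)) / std
    img = np.transpose(img, (2, 0, 1))                 # HWC -> CHW
    return np.ascontiguousarray(img[np.newaxis, ...])  # add batch dim -> NCHW

# Dummy image standing in for a real OpenCV frame
img = np.zeros((640, 640, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 640, 640)
```

If TRT was fed the un-transposed (NHWC) buffer while MXNet got NCHW, you'd see exactly this pattern: outputs that are close but not identical, and fewer boxes surviving the score threshold.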
To help us debug, can you share a small repro containing the MXNet model, the code that converts it to TRT, and inference code that shows the difference in results?
Hi again NVES, thanks for the reply.
I'd like the model & code to be kept private. Do you have an FTP server or any private hosting I could upload the code to, instead of posting it on the board here?