Use TensorRT without any optimization

I was experimenting with some optimizations in TensorRT on a Jetson TX2, and at some point my model's output was much different from what I expected. I would like to ask if I can run TensorRT without any optimization, to check whether my model works as intended or not?

I would start with a really small chunk of your model, then slowly add more, validating at each step that the output matches.
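To make that step-by-step validation concrete, here is a minimal sketch of a comparison helper. The function name and tolerance are my own choices; the idea is just to run the same input through the reference framework and the truncated TensorRT engine, and compare with a tolerance rather than exact equality, since FP32 engines rarely match bit-for-bit:

```python
import numpy as np

def outputs_match(reference, candidate, atol=1e-3):
    """Compare a reference output (e.g. from Keras/TF) against a
    TensorRT output for the same input.  Shapes must agree, and
    values must agree within an absolute tolerance."""
    reference = np.asarray(reference, dtype=np.float32)
    candidate = np.asarray(candidate, dtype=np.float32)
    if reference.shape != candidate.shape:
        return False
    return np.allclose(reference, candidate, atol=atol)
```

You would truncate the model after layer N, build an engine from just that prefix, and call `outputs_match` on the two results before adding the next layer.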

That seems like a good idea. I am using the Python API for testing now, and for some reason I get this error:

[libprotobuf FATAL google/protobuf/stubs/common.cc:79] This program was compiled against version 3.4.0 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.5.1).

Any idea how to fix it?

You need to uninstall protobuf 3.5.1 and install 3.4.0.
The release is here:
https://github.com/google/protobuf/releases/tag/v3.4.0

You need to compile and install it yourself; just follow the instructions on GitHub.
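After swapping versions, a quick sanity check from Python can confirm what runtime is actually installed. This is a pure-Python sketch; the major.minor comparison is my reading of what the error message is enforcing, not protobuf's exact compatibility rule:

```python
def version_tuple(version):
    """Turn a version string like '3.4.0' into (3, 4, 0)."""
    return tuple(int(part) for part in version.split("."))

def runtime_is_compatible(installed, required="3.4.0"):
    """Rough check: the error above fired because the installed
    runtime (3.5.1) differed from the compiled-against version
    (3.4.0), so compare major.minor."""
    return version_tuple(installed)[:2] == version_tuple(required)[:2]

# Print the runtime version Python actually sees, if protobuf is installed.
try:
    import google.protobuf
    print(google.protobuf.__version__)
except ImportError:
    print("protobuf is not installed")
```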

I just realized that my input for TensorFlow is in BGR order, since OpenCV reads images that way, whereas TensorRT is probably taking the input in RGB right now. The result still seems off after I put both in RGB order.

Does the order of color channels matter in TensorRT? Does it make a huge difference?

Probably not a whole lot for computer-vision models.

Do make sure you are feeding your input in CHW format, though.
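Since OpenCV hands you images in HWC layout, the CHW requirement means a transpose before the data is copied into the engine's input buffer. A minimal NumPy sketch (the helper name is my own):

```python
import numpy as np

def hwc_to_chw(image):
    """Rearrange an image from HWC (OpenCV's layout) to the CHW
    layout TensorRT expects, and make it contiguous so it can be
    copied straight into the input buffer."""
    return np.ascontiguousarray(image.transpose(2, 0, 1))

# Example: a 4x6 3-channel image becomes 3 planes of 4x6.
img = np.zeros((4, 6, 3), dtype=np.float32)
print(hwc_to_chw(img).shape)  # (3, 4, 6)
```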

Thanks for the reply; I have a quick question, though. The model I am using is YOLO, and the original order is NHWC. The input, however, is a different story.

For the Python API (run on a GTX 1050 Ti), I simply read the image in HWC order, and it works just fine and produces some output.
For the C++ API (run on a Jetson TX2), the input images are fed in CHW order. The strange thing is that both the Python API and the C++ API seem to produce the same output.

I have also messed with the UffInputOrder flag, but it seems to do nothing at the moment. Any idea where my mistake could be?

Oh, and thanks a lot, Thomas, your answer was very helpful.

Please ignore that; thanks to your suggestion I realized it was actually my fault for not checking carefully. I did not put the picture in the right order. It is giving me good results now, though slightly off from my pure Keras model.

Thanks, Thomas, it was a great help.

@dht7166, as you said, your model was originally trained with HWC order input, so did you feed your image in CHW order or HWC order to your TensorRT engine to get the good result? I am looking forward to your reply. Thanks!

The image has to be in CHW order. And you probably want the color channel order to match what you used in training as well.
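Putting both points together, a preprocessing step for an OpenCV image feeding an RGB-trained model would be something like the sketch below. This is an illustration under those assumptions; any normalization (e.g. dividing by 255) is model-specific and omitted:

```python
import numpy as np

def preprocess_bgr_hwc(image):
    """Take an OpenCV-style image (BGR, HWC) and produce the tensor
    an RGB-trained TensorRT engine expects: RGB channel order, CHW
    layout, with a leading batch dimension (NCHW)."""
    rgb = image[..., ::-1]                             # BGR -> RGB
    chw = rgb.transpose(2, 0, 1)                       # HWC -> CHW
    return np.ascontiguousarray(chw[np.newaxis, ...])  # add N -> NCHW
```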

@dht7166. Thanks for the reply. But my TF model was trained with HWC order; that is, the input image is HWC and conv2d is also HWC. Before converting the TF model to UFF, do I need to modify the TF graph to take CHW as input and then transpose it to HWC, and then convert it to UFF? Right now, the converted UFF model shows the input order as NHWC in the .uff.pbtxt file, which is not the CHW order required by TensorRT, and I get wrong detection results at present.
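For what it's worth, the transpose step described in the question (accept NCHW at the graph input, hand NHWC to the rest of the HWC-trained graph) is a permutation with `perm=(0, 2, 3, 1)`; `tf.transpose` takes the same `perm` argument. A NumPy sketch of the index mapping, not the actual TF graph edit:

```python
import numpy as np

# An NCHW batch, as TensorRT would feed it.
nchw = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# The permutation the proposed transpose node would apply so the
# rest of the HWC-trained graph sees NHWC again.
nhwc = nchw.transpose(0, 2, 3, 1)

print(nhwc.shape)  # (2, 4, 5, 3)
```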