[Solved] TensorRT3 - Tensorflow implementation in PX2

It works (produces output) if I structure my input as DimsCHW(28, 28, 1). However, as per the instructions at https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#mnist_uff_definenetwork:

You also need to register the input dimensions with NCHW format (even if the exported network was in NHWC):
parser->registerInput("Input_0", DimsCHW(1, 28, 28));

I have also tried modifying my network to take NCHW-format input, but even in that case it doesn’t work.
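For clarity, the flow I am describing looks roughly like the sketch below, assuming the standard TensorRT C++ UFF parser API. The output node name is a placeholder rather than my exact code, and the exact registerInput signature may differ between TensorRT versions:

```cpp
#include "NvInfer.h"
#include "NvUffParser.h"

// Parse a UFF model exported from TensorFlow into a TensorRT network definition.
bool parseUff(const char* uffFile, nvinfer1::INetworkDefinition& network)
{
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // Register the input in NCHW order even though the TensorFlow graph
    // was exported in NHWC, as the developer guide instructs.
    parser->registerInput("Input_0", nvinfer1::DimsCHW(1, 28, 28));

    // Placeholder output node name; this must match the exported graph.
    parser->registerOutput("Output_0");

    bool ok = parser->parse(uffFile, network, nvinfer1::DataType::kFLOAT);
    parser->destroy();
    return ok;
}
```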

Dear dhingratul,
Can you please create a bug with all the steps to reproduce the new UFF file for your model?
Please share the bug ID and I will look into this issue.
Please log in to https://developer.nvidia.com/drive with your credentials, then go to MyAccount->MyBugs->Submit a new bug to file the bug.

BUG ID 2108877

^ Resolved; the problem was with the “freezing” function.

Dear dhingratul,

Thank you for your update.

Hi SteveNV, SivaRamaKrishna,

Unfortunately, I did not get any answers to the questions in my previous post.

Are plugin layers supported by the .uff model parser for TF models?
Is INT8 calibration supported for the engine of a .uff model with plugin layers on PX2?
Which steps can be done in Python, and which have to be implemented in C++?

I’m using TensorRT 4 RC and DriveWorks 5.0.5.0b.

Thank you.

Hi SteveNV, SivaRamaKrishna,

I also have the same questions as Maximilian.

1. After I copy a UFF file to the PX2, how can I use the DriveWorks C++ API to do inference?

2. In the object detection sample, I only see NVIDIA loading a tensorrt_engine.bin to do inference. How can I get such a .bin file from a UFF model?
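For reference, my (possibly incomplete) understanding of how a serialized engine such as tensorrt_engine.bin is loaded and run with the plain TensorRT C++ API, outside of DriveWorks, is sketched below; the binding names, buffer sizes, and file name are assumptions:

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read the serialized engine from disk (file name is an assumption).
    std::ifstream file("tensorrt_engine.bin", std::ios::binary | std::ios::ate);
    if (!file) return 1;
    size_t size = file.tellg();
    std::vector<char> blob(size);
    file.seekg(0);
    file.read(blob.data(), size);

    // Deserialize the engine and create an execution context.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Allocate device buffers for the bindings. The names and sizes below
    // assume a 1x28x28 float input and a 10-class float output.
    const int inputIndex  = engine->getBindingIndex("Input_0");
    const int outputIndex = engine->getBindingIndex("Output_0");
    void* buffers[2];
    cudaMalloc(&buffers[inputIndex], 1 * 28 * 28 * sizeof(float));
    cudaMalloc(&buffers[outputIndex], 10 * sizeof(float));

    std::vector<float> input(1 * 28 * 28, 0.f), output(10, 0.f);
    cudaMemcpy(buffers[inputIndex], input.data(),
               input.size() * sizeof(float), cudaMemcpyHostToDevice);

    // Run inference with batch size 1.
    context->execute(1, buffers);

    cudaMemcpy(output.data(), buffers[outputIndex],
               output.size() * sizeof(float), cudaMemcpyDeviceToHost);

    // Clean up.
    cudaFree(buffers[inputIndex]);
    cudaFree(buffers[outputIndex]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```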

Dear patrick-wu,
We will support UFF model -> TensorRT model conversion using tensorRT_optimizer in the next PDK release (in September). However, please note that if the DNN requires TensorRT plugin layers, unfortunately you cannot integrate such networks in DriveWorks.
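For reference, and independent of the DriveWorks tensorRT_optimizer tool mentioned above, the plain TensorRT C++ API can also produce such a serialized engine file from a UFF model. A rough sketch, assuming the TensorRT 3/4 API and using placeholder input/output node names and dimensions:

```cpp
#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // Parse the UFF model into the network definition.
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();
    parser->registerInput("Input_0", nvinfer1::DimsCHW(1, 28, 28)); // assumed name/dims
    parser->registerOutput("Output_0");                             // assumed name
    if (!parser->parse("model.uff", *network, nvinfer1::DataType::kFLOAT))
        return 1;

    // Build the engine.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 24);
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serialize the engine to a file that can later be deserialized on the target.
    nvinfer1::IHostMemory* serialized = engine->serialize();
    std::ofstream out("tensorrt_engine.bin", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());

    // Clean up.
    serialized->destroy();
    engine->destroy();
    network->destroy();
    parser->destroy();
    builder->destroy();
    return 0;
}
```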