TensorFlow Faster RCNN model on TX2 using TensorRT

Hi,
Can I run a Faster RCNN model generated from TensorFlow using TensorRT on the TX2? Please share the procedure for this.

Hello,

The included tensorrt/samples/sampleFasterRCNN/ is TF-based. The general procedure is similar. If you are encountering specific issues, please let us know.

regards,
NVIDIA Enterprise Support
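
For reference, here is a minimal sketch of the general TensorFlow-to-TensorRT flow that the reply above alludes to (freeze the graph, convert it to UFF, parse the UFF file, and build an engine). It assumes the TensorRT 5 Python API and the uff converter are installed; every file name, node name, and shape below is a placeholder, and a Faster RCNN graph will usually contain proposal/ROI ops that the UFF parser cannot handle without custom plugins, so treat this as illustrative rather than a complete recipe.

import tensorrt as trt
import uff

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# 1. Convert a frozen TensorFlow graph to UFF (placeholder file/node names).
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",      # placeholder: your frozen TF graph
    output_nodes=["detection_out"],   # placeholder output node name
    output_filename="model.uff",
)

# 2. Parse the UFF file and build a serialized TensorRT engine.
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("image_tensor", (3, 600, 1000))  # placeholder CHW input
    parser.register_output("detection_out")
    parser.parse("model.uff", network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28  # 256 MB of builder scratch space
    engine = builder.build_cuda_engine(network)

    with open("model.engine", "wb") as f:
        f.write(engine.serialize())

The UFF conversion can be done on a host machine, but the resulting engine is specific to the GPU it is built on, so the build step would normally be run on the TX2 itself.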

Hi, when I went to the mentioned location, the files I found were the C++ and Caffe version. Can you please let me know how to build the TensorFlow version?

Hello,

From a standard Linux TensorRT tar installation, you can find the samples (including sampleFasterRCNN) at TensorRT-5.xxx/targets/x86_64-linux-gnu/samples

If you are using the NVIDIA GPU Cloud (NGC) TensorRT container, then you'll find the sample at: /workspace/tensorrt/samples/sampleFasterRCNN

Hi,
Is this only available in TensorRT 5? My end application uses a Faster RCNN + Inception v2 model built from TensorFlow; would I be able to easily deploy it on the TX2 board?

Hello,

The Faster RCNN example is available for TensorRT 4.x as well.

Hi,
Apologies for asking again, but I am not able to find the TF-based version of Faster RCNN + Inception v2 in the mentioned location. Can you please elaborate on the version and location where I can find it?

Hello,

Are you saying you can't find the Faster RCNN sample described here? https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_210/tensorrt-user-guide/index.html#fasterrcnn_sample

Hi,

I am able to find the Faster RCNN files in the mentioned path, but they are all the Caffe version. I am not able to find anything for the TensorFlow version that I could use to run my model, which was trained with Faster RCNN + Inception v2.