How can I fine-tune the TensorRT Faster R-CNN sample?

In Jetpack there is a TensorRT example for faster RCNN:
tensorrt/samples/sampleFasterRCNN/

Running this uses the model data at tensorrt/data/faster-rcnn/:

  • faster_rcnn_test_iplugin.prototxt
  • VGG16_faster_rcnn_final.caffemodel

How can I fine-tune the model for new classes? E.g., by using DIGITS?

How can I change the resolution of the R-CNN input image to a square image?

How many fps can I expect running it on a Jetson TX2?

Hi,

Please check our document for the answer.

In /usr/share/doc/tensorrt/TensorRT-3-User-Guide.pdf.gz:
3.9. SampleFasterRCNN - Using the Plugin Library

In this sample you will learn:

  • How to implement the Faster R-CNN network in TensorRT
  • How to perform a quick performance test in TensorRT
  • How to implement a fused custom layer
  • How to construct the basis for further optimization, for example using INT8 calibration, a user-trained network, etc.
Thanks.

Ok, I checked the User Guide again.

It basically shows how to run the example and explains the differences from the original paper / prototxt implementation.

My question was how I can fine-tune (i.e., train) this network.
The docs do not answer this!

Do you have any idea how to do this, @AstaLLL?

Hi,

The model was trained by the Faster R-CNN authors and is slightly modified in the RPN and ROIPooling layers for the TensorRT plugin interface.

Faster R-CNN training requires the authors' custom Caffe branch.
Check here for more information:

Thanks.
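Based on the py-faster-rcnn workflow referenced above, fine-tuning for new classes usually comes down to editing the training prototxt so the classification and regression heads match your class count, and renaming those layers so Caffe reinitialises them instead of loading the original 21-class PASCAL VOC weights. The layer names below follow the VGG16 model in that repo; treat this as a sketch, not a verified recipe:

```prototxt
# cls_score must output num_classes (new classes + background);
# bbox_pred must output 4 * num_classes.
layer {
  name: "cls_score_new"        # renamed from "cls_score" so weights reinitialise
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  inner_product_param {
    num_output: 3              # e.g. 2 new classes + background
  }
}
layer {
  name: "bbox_pred_new"        # renamed from "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7"
  top: "bbox_pred"
  inner_product_param {
    num_output: 12             # 4 * 3 classes
  }
}
```

You would then pass VGG16_faster_rcnn_final.caffemodel as the initial weights to the repo's training script, point it at the edited prototxt, and lower the base learning rate in the solver for fine-tuning.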

Thanks for the answer. I will try to fine-tune using the repo and report back on my success (hopefully) :)

@plieningerweb any success? :-)

Sorry, I didn't have time to test it yet.

But I checked the URL of the downloaded model, and I think I remember the same URL is also used in the example of the py-faster-rcnn repo.

So, guessing from this, it should work. But as I said, not confirmed yet.

What about you?

I am just getting started with the Jetson platform.
Today I managed to compile TensorFlow and run pretrained object detection models over video snippets with the TF Object Detection API and OpenCV on the TX2. No TensorRT at the moment. The performance would really benefit from TensorRT, though.

Could you maybe post some frames-per-second (fps) numbers for the object detection examples, with the name of the model or a link? We wanted to try the same but have focused on TensorRT first…

Would be awesome and helpful for a lot of people I imagine!

Sure - for SSD MobileNet pretrained on COCO (taken directly from the model zoo) and deployed in TF on the TX2, I get about 5 FPS while processing a video. I wanted to try out larger models such as Faster R-CNN, but it seems to kill the process (it requests too many resources). I'm not really sure why this is a problem on the TX2, since it has more GPU memory than my own GTX 1060 (6 GB). But I need to get that going, since that will be my go-to model!
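In case it helps with comparing numbers: throughput figures like the ~5 FPS above are easy to measure with a simple timing loop around whatever detector you run. This is a generic sketch; the process_frame callable is a stand-in for your actual inference call (e.g. a session.run or net.forward wrapped in a lambda):

```python
import time

def measure_fps(process_frame, frames):
    """Return the average frames per second over an iterable of frames."""
    start = time.perf_counter()
    count = 0
    for frame in frames:
        process_frame(frame)
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else float("inf")
```

For a fair number, discard the first frame or two (graph warm-up / CUDA context creation on the TX2 can dominate otherwise) and average over at least a few hundred frames.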

Please let me know if you advance any further with TensorRT. I got TensorRT (Python) going today and will start investigating porting models to TensorRT within the next few weeks, however I am not too optimistic about it yet :-)

Hi,

We also checked the TensorFlow Object Detection API. There is a control-type operation (tf.where) that runs slowly on the GPU.
We are checking if there is any WAR (workaround) to place this operation on the CPU for better performance.

We will post an update here once we have one.
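For reference, the mechanism for pinning a single op to the CPU is tf.device. Applying it inside the Object Detection API's generated graph is more involved; this is just a minimal sketch of the idea (the thread's TF 1.x setup would additionally wrap the evaluation in a session):

```python
import tensorflow as tf

cond = tf.constant([True, False, True])
a = tf.constant([1, 2, 3])
b = tf.constant([10, 20, 30])

# Force this op onto the CPU even when a GPU is available,
# so the slow GPU kernel for tf.where is never used.
with tf.device('/cpu:0'):
    picked = tf.where(cond, a, b)
```

Ops created outside the tf.device block are still placed normally, so only the problematic operation pays the host-device transfer cost.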
Thanks.

Here is an update for the TensorFlow Object Detection API:
https://devtalk.nvidia.com/default/topic/1027819/jetson-tx2/object-detection-performance-jetson-tx2-slower-than-expected/post/5235663/#5235663

Thanks.

Hi all,

Any progress with TensorRT? I’d be happy to port any object detection model to TensorRT but the docs aren’t really helpful. Did anyone manage to port that Faster R-CNN sample that is shipped with TensorRT?

Cheers!

Hi, has anyone got anything? I, too, am looking for a Faster R-CNN implementation for detecting smaller objects using TensorRT, Caffe, and a Jetson TX2. If anyone could help…