How to run a faster_rcnn_inception_v2 trained TensorFlow object detection model on Jetson TX2?

I have already configured the Jetson TX2 with tf_trt_models and TensorFlow successfully, and I am able to import both tensorflow and tf_trt_models in python3 on the Jetson TX2. I have a faster_rcnn_inception_v2 trained TensorFlow object detection model, and now I have a couple of questions:

  1. The tf_trt_models repository converts models whose config files resemble those of the ssd_mobilenet_v1_coco and ssd_inception_v2_coco models. Is there any other way to convert a faster_rcnn_inception_v2 model into a TF-TRT model, or am I missing something?

  2. What steps do I need to follow in order to convert it into a TF-TRT model? Can anyone please guide me through this? It would really help me learn new concepts.

Please feel free to reach out should you require more details. Thanks.

Running https://github.com/NVIDIA-AI-IOT/tf_trt_models/blob/master/examples/detection/detection.ipynb with faster_rcnn_inception_v2 gives me the following error:

InvalidArgumentError: node BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Slice (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) has inputs from different frames. The input node BatchMultiClassNonMaxSuppression/map/while/Reshape_1 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) is in frame 'BatchMultiClassNonMaxSuppression/map/while/while_context'. The input node BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Slice/begin (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) is in frame ''.
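
For reference, this is roughly the conversion path the notebook takes. It follows the tf_trt_models README, with the Faster R-CNN pipeline config and checkpoint passed in directly (the paths are illustrative), so treat it as a sketch of my setup rather than the exact notebook cell:

    from tf_trt_models.detection import build_detection_graph
    import tensorflow.contrib.tensorrt as trt  # TF-TRT in TensorFlow 1.x

    # Build a frozen graph from the model's pipeline config and checkpoint
    # (paths are illustrative placeholders).
    frozen_graph, input_names, output_names = build_detection_graph(
        config='faster_rcnn_inception_v2_coco.config',
        checkpoint='model.ckpt',
    )

    # Ask TF-TRT to replace compatible subgraphs with TRTEngineOp nodes;
    # the InvalidArgumentError above appears during this conversion flow.
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=output_names,
        max_batch_size=1,
        max_workspace_size_bytes=1 << 25,
        precision_mode='FP16',
        minimum_segment_size=50,
    )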

Hi,

The error looks like a TensorFlow issue.

May I know which TensorFlow version you are using?
Would you mind trying the conversion with the latest package we shared this month?

Thanks.

Hi,

I am on JetPack 4.2 (L4T 32.1.0) with TensorRT 5.0.6.3, and below is how I installed TensorFlow:
$ sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3

Hi,

Do you have any dependency on JetPack 4.2?
If not, it’s recommended to upgrade your device to the latest release first.

Thanks.

Hi,

Can you please explain what a dependency on JetPack 4.2 means? Thank you.

Hi,

If you don’t need to stay on JetPack 4.2, it’s recommended to reflash your device with JetPack 4.4.x first.

Thanks.

Hi,

I installed the latest JetPack version and the TensorFlow 1.15.4 container (20.10) successfully on the Jetson TX2. When I run faster_rcnn_inception_v2_coco in https://github.com/NVIDIA-AI-IOT/tf_trt_models/blob/master/examples/detection/detection.ipynb, I still get the same InvalidArgumentError as above: node BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Slice has inputs from different frames.

Can anyone guide me through this? It would really help me continue learning; this is how I have gotten this far. Is it even possible to convert the faster_rcnn_inception_v2_coco model into a TF-TRT model?
Thanks

Hi,

Since the GitHub repository hasn’t been updated for a while, it is quite possible that it doesn’t support the newer operations.
May I know your target first? Do you want to run the model with TF-TRT or with standalone TensorRT?

It’s more recommended to use standalone TensorRT for performance reasons.
You can find a Faster R-CNN example here:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffFasterRCNN
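
For context, that sample consumes a .uff model. A minimal sketch of producing one from a frozen graph with the uff Python package that ships with TensorRT (the output node name is model-specific and purely illustrative; the sample additionally relies on a preprocessor/graph-surgery step for its plugin nodes, which is omitted here):

    import uff

    # Serialize the frozen TensorFlow graph into UFF format.
    uff.from_tensorflow_frozen_model(
        'frozen_inference_graph.pb',
        output_nodes=['detection_out'],  # illustrative output node name
        output_filename='faster_rcnn.uff',
    )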

Thanks.

Hi,

I am trying to deploy my custom faster_rcnn object detection model, trained with TensorFlow, on the Jetson TX2 platform. The faster_rcnn model is not supported by tf_trt_models, but as you mentioned there is the workaround of using the convert-to-uff utility with a trained custom faster_rcnn TensorFlow model, so I believe I should give that approach a try.

  1. Can you please explain how TF-TRT differs from convert-to-uff?
  2. In order to use my trained custom faster_rcnn object detection model, would I just need frozen_graph.pb (the inference graph of the trained model)? The attached guide mentions a .pb file in the prerequisites (section 3, point 2), so I am assuming that refers to the frozen graph of the trained model.

Thanks

Hi,

1.
TF-TRT uses the TensorFlow interface and moves compatible layers into a TensorRT implementation.
This allows you to keep the TensorFlow interface (pre-processing and post-processing) but requires much more resources.

For standalone TensorRT, there are two possible flows for a TensorFlow-based model:
[1] .pb → .uff → .plan
[2] .pb/.h5 → .onnx → .plan
In general, this requires a model conversion that represents the model in another format supported by TensorRT (UFF or ONNX); a sketch of flow [1] follows below.
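
As a rough illustration of flow [1], this is how a .uff file can be turned into a serialized engine (.plan) with the TensorRT Python API. The input/output names and shape are placeholders, and build_cuda_engine is the pre-TensorRT-8 builder call:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Parse the UFF model into a TensorRT network definition.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        parser.register_input('image_tensor', (3, 600, 1000))  # placeholder name/shape (CHW)
        parser.register_output('detection_out')                # placeholder output name
        parser.parse('faster_rcnn.uff', network)

        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 28  # 256 MB of builder scratch memory

        # Build and serialize the engine to a .plan file.
        engine = builder.build_cuda_engine(network)
        with open('faster_rcnn.plan', 'wb') as f:
            f.write(engine.serialize())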

2.
Yes, please export the frozen graph from your model and follow the steps to do the conversion.
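
If the model was trained with the TensorFlow Object Detection API, the frozen graph is typically produced with its export script, along these lines (the paths and checkpoint number are illustrative):

$ python3 object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_prefix model.ckpt-100000 \
    --output_directory exported_model/

The resulting exported_model/frozen_inference_graph.pb is the file to feed into the conversion.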

Thanks.