Increasing fps on Jetson TX2 for a Tensorflow algorithm

Hi ,

I am working on a Real time Object detection algorithm using Tensorflow with GPU support v1.0.1 on Jetson TX2.
After using the below commands:
sudo nvpmodel -m 0
sudo ./jetson_clocks.sh

The algorithm takes 0.8 to 0.9 seconds to detect one frame (roughly 1.1-1.25 fps).

How can I increase the fps to 17-19?

Kindly help.

Hi,

We have an object detection sample with DetectNet and TensorRT:
https://github.com/dusty-nv/jetson-inference

It can reach around 11 fps on a TX2 with maximized frequency.
Thanks.

Hi AastaLLL,

Thanks for your response.
I am concentrating on increasing the fps of my current working algorithm, as it is fine-tuned for automotive on-road object detection.

Is there any other way I can increase the fps?

Kindly help

Hi,

Suppose you want to use a TensorFlow model on Jetson.
Here is some advice for you:

1. Try TensorRT.

2. Profile your model first.
If an op runs slowly on the GPU (e.g. tf.where), try placing it on the CPU.

Thanks.

Hi AastaLLL,

It takes 17 seconds per frame when placed on the CPU, versus 0.8 seconds per frame when run on the GPU.
The code is written in Python. I have TensorRT v3.0 installed on the Jetson TX2. How do I use the UFF format to make it compatible with TensorRT?

I did run the DetectNet code from the link shared - https://github.com/dusty-nv/jetson-inference
Is there any way to train the model using our own dataset and checkpoint file?
For instance, VGGnet_fast_rcnn_iter_70000.ckpt.

Thanks,
Pratosha

Hi,

To run TensorRT with a TensorFlow model on Jetson, the following steps are required:

1. Convert TensorFlow model to UFF format

  • Requires an x86 Linux platform
  • Python interface
  • Sample is located at '/usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/'

2. Create a TensorRT engine from the UFF file

  • Can be applied on Jetson
  • C++
  • Sample is located at '/usr/src/tensorrt/samples/sampleUffMNIST/'

You can re-train the DetectNet model with DIGITS; a tutorial can be found here:
https://github.com/dusty-nv/jetson-inference#digits-workflow

Please note that DetectNet is a Caffe-based model and uses nvcaffe_parser rather than the UFF parser.

Thanks.

Hi AastaLLL,

Thanks :) will surely try