TensorFlow MobileNet object detection model on TX2 is very slow?

I tested the TensorFlow MobileNet object detection model on the TX2, and each frame needs 4.2 s, which I think is abnormal. Can anyone provide a suggestion? Thanks.

Hi,

Please maximize TX2 performance first:

  1. Set nvpmodel to MAX-N
  2. Run jetson_clocks.sh

Thanks.

Thank you for the reply.

  1. When I first boot the TX2 and run object detection with TensorFlow, each video frame needs 3.5 s.
  2. After running ‘sudo nvpmodel -m 0’ and ‘sudo ./jetson_clocks.sh’, each video frame needs 4.5 s.

So I have no idea how to solve this problem.

Are you using swap memory during inference?
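
In case it helps, swap on a TX2 is typically added like this (the size and path here are only examples; adjust to your storage):

```shell
# Create an 8 GB swap file (adjust size as needed)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify that the swap space is active
free -m
```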

I will try it, thank you very much!

How big is the model? (How many parameters?)
Which inference runtime are you using? Are you sure you’re using the GPU?
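
One frequent cause of multi-second frame times is including the first `session.run()` call in the measurement, since it covers graph optimization and CUDA initialization. A minimal, framework-agnostic timing sketch (`infer` and `frames` are stand-ins for your actual model call and inputs):

```python
import time

def time_inference(infer, frames, warmup=3):
    """Average per-frame latency, excluding warm-up calls.

    The first few calls into a TensorFlow session can include graph
    optimization and CUDA kernel loading, so they are not representative
    of steady-state inference speed.
    """
    for frame in frames[:warmup]:
        infer(frame)                      # warm-up: not timed
    timings = []
    for frame in frames[warmup:]:
        start = time.perf_counter()
        infer(frame)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)    # mean seconds per frame
```

If the steady-state number is far below the 3–4 s you are seeing, the slowdown is in setup rather than in per-frame inference.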

There are a few things I would like to share:

  1. In object detection using deep learning, the bottleneck right now is post-processing (non-maximum suppression), not prediction. Hopefully we will see some improvements in this area soon. There is a new paper this year using deep learning for non-maximum suppression: [1705.02950] Learning non-maximum suppression

  2. If you are using a Keras/TensorFlow model, one thing you could do is quantize your trained model solely for inference purposes. There is currently an active discussion on how to quantize a model on ARM 64-bit here (Quantized graph fails to work on NVIDIA Jetson TX1 architecture although it worked on a normal PC? · Issue #9301 · tensorflow/tensorflow · GitHub)
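
To make point 1 concrete, here is what the standard greedy NMS post-processing step looks like as a plain-NumPy sketch (boxes assumed in [x1, y1, x2, y2] format):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_threshold]
    return keep
```

With thousands of candidate boxes per frame, this sequential loop runs on the CPU, which is why it can dominate frame time on a device like the TX2.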

@seancc Were you able to improve the speed, and how? Thank you

Hi,

We have released TensorRT 3 with support for TensorFlow models.
You can get acceleration with TensorRT 3.

Please find here for more information:
https://devtalk.nvidia.com/default/topic/1024441/jetson-tx2/tensorrt-3-0-rc-now-available-with-support-for-tensorflow/

Hi everyone,
Can anyone please give us details about “nvpmodel”?
How and what can we set for training and testing?
Thanks

Hi,

Please check this page for information:

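For quick reference, nvpmodel is a command-line tool that selects the TX2 power/performance profile at runtime (mode numbers vary by board):

```shell
sudo nvpmodel -q --verbose   # query the current power mode
sudo nvpmodel -m 0           # switch to MAX-N (all cores, max clocks) on TX2
```
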
Thanks.