Hi all,
Device: Jetson TX2 (JetPack 3.3).
TensorFlow: v1.9.
I trained a Faster R-CNN model with the Object Detection API on my computer and exported the inference graph, but when I ran it on the Jetson TX2 I got only 1 FPS. I then tried YOLOv3, which gave me 1.7 FPS.
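For context, here is a minimal sketch of how an FPS number like this could be measured with the exported frozen graph on TF 1.x; the graph path, input size, and run count are placeholders, and the tensor names assume a standard Object Detection API export, so they should be verified against the actual model.

```python
# Minimal FPS measurement sketch for a frozen TF 1.x detection graph.
# GRAPH_PATH, the dummy input size, and n_runs are placeholders; the
# tensor names below are the standard Object Detection API outputs.
import time
import numpy as np
import tensorflow as tf

GRAPH_PATH = 'frozen_inference_graph.pb'  # assumed export location

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 600, 600, 3), dtype=np.uint8)  # dummy frame
    fetches = [graph.get_tensor_by_name(name + ':0')
               for name in ('detection_boxes', 'detection_scores',
                            'detection_classes', 'num_detections')]
    # Warm up once so graph/CUDA initialization is not counted.
    sess.run(fetches, feed_dict={'image_tensor:0': image})
    n_runs = 20
    start = time.time()
    for _ in range(n_runs):
        sess.run(fetches, feed_dict={'image_tensor:0': image})
    print('Approx. FPS: %.2f' % (n_runs / (time.time() - start)))
```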
Q1: Can I improve the FPS above?
Q2: If I train my data on SSD-MobileNet-v1, can I count on a decent FPS?
Please recommend anything that might help.
Thanks in advance.
Hi,
You can run jetson_clocks to improve performance; setting the nvpmodel power mode also helps:
https://developer.ridgerun.com/wiki/index.php?title=Nvidia_TX2_NVP_model
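As a rough sketch of the two steps above, the snippet below queries and raises the power mode and then pins the clocks. It assumes nvpmodel and jetson_clocks are on the PATH and that the script is run with root privileges; on older JetPack releases the clocks script may be named jetson_clocks.sh instead, and the available mode numbers depend on the L4T release (mode 0 is MAXN on the TX2).

```python
# Sketch: query and raise the TX2 power mode, then pin the clocks.
# Assumes nvpmodel and jetson_clocks are on the PATH and the script is
# run as root (e.g. via sudo).
import subprocess

def maximize_performance():
    # Show the current power mode.
    subprocess.run(['nvpmodel', '-q'], check=True)
    # Mode 0 (MAXN) enables all cores at maximum frequency on the TX2.
    subprocess.run(['nvpmodel', '-m', '0'], check=True)
    # Lock CPU/GPU/EMC clocks to their maximum for the selected mode.
    subprocess.run(['jetson_clocks'], check=True)

if __name__ == '__main__':
    maximize_performance()
```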
We have a GStreamer plugin that performs inference with different networks; here you can check benchmark results for some of the supported networks:
https://developer.ridgerun.com/wiki/index.php?title=GstInference/Benchmarks
Regards.
Hi,
It’s recommended to reflash your device with our latest JetPack first.
There is around a 1.5x speedup when upgrading from rel-28 to rel-32.
You can find some benchmark results for Jetson here (tested on the Nano):
https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks
We get 39 FPS with SSD MobileNet-V2 on the Nano.
Thanks.
Hi AastaLLL,
My concern is that I want to work with ROS Kinetic later, which is supported on Ubuntu 16.04; that's why I am using JetPack 3.3.
For now I am going to train my data on SSD-MobileNet and will get back to you when I have the results.