I have installed SSD on a Jetson TX2 with JetPack 3.1 and get around 4-5 fps when running it with a webcam. Why is it so slow on the TX2? And are there any ways to fix the low performance?
Is there any specific reason why YOLOv2 (15-20 fps) performs so much better than SSD (4-5 fps) on the TX2?
Does anyone know of any good optimizations that are available?
To be honest, I have not used TensorRT before. Could you please explain how I should use TensorRT to improve the performance of SSD? Looking at the jetson-inference repo on GitHub, it seems like I have to feed the caffemodel and prototxt to a jetson-inference program, for instance this one: https://github.com/dusty-nv/jetson-inference/blob/master/imagenet-camera/imagenet-camera.cpp. But in the forum post you linked to, it seems like the poster is modifying the layers in Caffe. What am I supposed to do: write a separate SSD jetson-inference program, or modify the layers in Caffe? Could you please give me some guidelines on how to use TensorRT to achieve better performance with SSD?
What is your configuration? What SSD model, what underlying software?
Hi, I am using the SSD300 model and have trained the network on my own dataset. I am using the framework given here: https://github.com/weiliu89/caffe/tree/ssd. As for the underlying software, I am using JetPack 3.1, which means CUDA 8 and cuDNN 6, i.e. mostly the default software; correct me if I am wrong.
You can find an example of how to use TensorRT to run inference on a Caffe model in jetson-inference.
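For context, the core of that import path is TensorRT's Caffe parser. A minimal sketch of the flow, assuming the TensorRT 2.x API shipped with JetPack 3.1 (the file names and the `detection_out` blob name are placeholders for your own SSD300 deploy/weights files), might look like:

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT requires a logger implementation.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Build a TensorRT network from the Caffe deploy file and weights.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Placeholder paths -- point these at your trained SSD300 files.
    const IBlobNameToTensor* blobs = parser->parse(
        "deploy.prototxt", "snapshot.caffemodel", *network, DataType::kFLOAT);

    // Mark the SSD output blob (the name depends on your prototxt).
    network->markOutput(*blobs->find("detection_out"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    // FP16 can noticeably improve throughput on the TX2 if accuracy allows:
    // builder->setHalf2Mode(true);

    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    parser->destroy();
    builder->destroy();
    // ... create an execution context from `engine` and run inference ...
    engine->destroy();
    return 0;
}
```

Note this will only succeed as-is for models whose layers TensorRT supports natively; for SSD the parse step is exactly where the unsupported layers surface.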
However, the SSD model contains some layers that TensorRT does not support natively.
For these layers, please implement them with the TensorRT plugin API.
There is a lot of discussion about plugin implementation in topic 1007313.
You can get more information there.
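To sketch what the plugin route looks like under the TensorRT 2.1 API: you implement `IPlugin` for each unsupported layer and hand the Caffe parser an `IPluginFactory` so it knows which layer names to route to your code. The `FlattenPlugin` class and the `"flatten"` layer-name check below are hypothetical examples; a real plugin for SSD's Permute/PriorBox/DetectionOutput layers needs actual GPU work behind `enqueue` and real serialization.

```cpp
#include <cstring>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;

// Hypothetical plugin for a layer TensorRT cannot map natively.
// A real implementation must do the layer's work on the GPU in enqueue()
// and implement serialize()/getSerializationSize() properly.
class FlattenPlugin : public IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        // Flatten CxHxW into (C*H*W)x1x1.
        return DimsCHW(inputs[0].d[0] * inputs[0].d[1] * inputs[0].d[2], 1, 1);
    }

    void configure(const Dims* inputs, int nbInputs,
                   const Dims* outputs, int nbOutputs, int maxBatchSize) override {}
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Flatten is a pure reshape, so a device-to-device copy would
        // suffice here (size bookkeeping omitted in this sketch).
        return 0;
    }

    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};

// Factory the Caffe parser consults when it hits an unsupported layer.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* name) override
    {
        return strcmp(name, "flatten") == 0;  // hypothetical layer name
    }

    IPlugin* createPlugin(const char* layerName,
                          const Weights* weights, int nbWeights) override
    {
        return new FlattenPlugin();
    }
};

// Register the factory before calling parser->parse(...):
//   PluginFactory factory;
//   parser->setPluginFactory(&factory);
```

The factory's `isPlugin` is queried with each layer name from the prototxt, so the strings you match must correspond to the layer names in your own deploy file.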