Hi,
I'm not sure whether I'm doing this right or not, but I have some questions about TX2 performance for object detection.
Following this detectnet-camera example from jetson-inference (https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-camera-2.md), I saw the TX2 run inference at around 50-70 FPS. (I just followed the example and didn't change anything.)
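For reference, my understanding is that the Python sample from that page boils down to a loop roughly like the one below (paraphrased; exact names such as gstCamera/glDisplay may differ between jetson-inference versions):

```python
import jetson.inference
import jetson.utils

# Load the default SSD-Mobilenet-v2 detection network shipped with jetson-inference
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Open the camera and an OpenGL display window
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    # Capture a frame, run detection, and render the results
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    # The FPS shown by the sample comes from the network itself
    display.SetTitle("detectnet-camera | {:.0f} FPS".format(net.GetNetworkFPS()))
```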
On the other hand, when I run this example - https://github.com/NVIDIA/object-detection-tensorrt-example/blob/master/SSD_Model/detect_objects_webcam.py - I get about 1.63 FPS (I used the imutils package from pip to measure the FPS).
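In case the measurement method matters, I wrapped the webcam loop with imutils' FPS counter, roughly like this (simplified sketch; `process_frame` just stands in for the TensorRT inference call in detect_objects_webcam.py):

```python
import cv2
from imutils.video import FPS

def process_frame(frame):
    # Placeholder for the TensorRT SSD inference step in detect_objects_webcam.py
    return frame

cap = cv2.VideoCapture(0)
fps = FPS().start()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    process_frame(frame)
    fps.update()  # count one processed frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

fps.stop()
cap.release()
print("elapsed: {:.2f}s".format(fps.elapsed()))
print("approx. FPS: {:.2f}".format(fps.fps()))
```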
My main goal in using the above example (SSD_Model) was to be able to run a TensorRT model generated by following this guide - https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/overview.html. The model I'm trying to use is an FP16 TensorRT engine built from DetectNet_v2.
Since 1.63 FPS seems extremely slow, I wanted to check whether this is normal or whether I implemented something incorrectly.
Could this be because the example uses an SSD model rather than DetectNet? If I wanted to test with DetectNet instead, what would I need to do? Also, would you recommend using jetson-inference for production? Any help would be appreciated.
Thank you