As 1.63 FPS seems extremely slow, I wanted to check whether this is normal or if I implemented something incorrectly.
Could it be that this is because the SSD example uses an SSD model rather than DetectNet? If I wanted to test with DetectNet, what would I have to do to try this out? Also, would you recommend using jetson-inference for production? If you could please help, I would appreciate it.
The SSD sample reads the camera via OpenCV's default backend, which is slow since it is a CPU implementation.
It's recommended to try our DeepStream SDK first for better performance.
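For cameras that do expose a V4L2 node, one common way to bypass OpenCV's default CPU capture path is to pass a GStreamer pipeline string to `cv2.VideoCapture` with the `cv2.CAP_GSTREAMER` backend. The sketch below is illustrative, not code from the sample; the device path, resolution, and frame rate are assumptions you would replace with your camera's actual values:

```python
def build_pipeline(device="/dev/video0", width=1280, height=720, fps=30):
    """Build a GStreamer capture string for a V4L2 camera.

    All parameter values here are illustrative assumptions; check your
    camera's supported modes with `v4l2-ctl --list-formats-ext`.
    """
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw, width={width}, height={height}, framerate={fps}/1 ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# Usage (requires OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(build_pipeline(), cv2.CAP_GSTREAMER)
```

On Jetson boards, the same idea extends to NVIDIA's hardware-accelerated GStreamer elements, which is what DeepStream builds on.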
I looked into the DeepStream documentation, and I searched the dev forum to figure out how to hook up a FLIR Machine Vision USB3 camera with FLIR's PySpin SDK and DeepStream.
It seems FLIR cameras are not compatible with DeepStream because they lack GStreamer support, and DeepStream expects V4L2 cameras (accessed through /dev/video0).
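To double-check that, a quick probe (my own sketch, not from the DeepStream docs) is to list the V4L2 device nodes the kernel actually exposes; a camera reachable only through a vendor SDK like PySpin typically will not show up here:

```python
import glob

def list_v4l2_devices():
    """Return the V4L2 device nodes the kernel exposes (e.g. /dev/video0)."""
    return sorted(glob.glob("/dev/video*"))

# An empty list means no V4L2-visible camera is present.
print(list_v4l2_devices())
```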
Would you have any other suggestion on what can be done to improve FPS on the TX2 board?
The bottleneck comes from the CPU-based camera reader.
To get better performance, you usually need to fetch the camera data directly into a GPU-accessible buffer.
This saves memory-copy time and also lets you accelerate pre-processing on the GPU.
However, it seems your camera does not support GPU-accessible buffers (either GPU buffers or pinned CPU buffers).
So it's recommended to confirm this with your camera vendor first.
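To get a feel for what each extra host-side copy costs per frame, here is a small, hypothetical benchmark in plain NumPy (the 1080p BGR frame size is an assumption; substitute your camera's actual format). On an embedded board like the TX2, these copies add up quickly at camera frame rates:

```python
import time
import numpy as np

# Assumed 1080p BGR frame (~6 MB); adjust to your camera's actual format.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)

n = 100
t0 = time.perf_counter()
for _ in range(n):
    _ = frame.copy()  # one extra CPU memcpy per frame
elapsed_ms = (time.perf_counter() - t0) * 1000 / n

print(f"~{elapsed_ms:.2f} ms per frame copy")
```

Every copy you eliminate from the capture-to-inference path gives that time back to the frame budget, which is why a zero-copy (GPU or pinned-buffer) capture path matters.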