So I’m using a TX2 to work with some object detection software. I’m really just learning right now, and it seems there are much better object detectors out there, such as YOLO. However, I want to stick with DetectNet for now, since the code is simpler to work with and understand.
I recently followed the instructions here
https://github.com/nvidia/digits/tree/master/examples/object-detection
to create an object detection model for vehicles, using the images and labels supplied on that page. I trained it using DIGITS on an AWS instance.
Once I got the model (training only took 1.5 hours with 16 GPUs), I ran a quick test using detectnet, and a single image takes almost 1.5 minutes to process. It is held up at:
[GIE] building CUDA engine
I have still frames from video I captured, and I’m running a bash script that feeds each image to the detectnet-console executable. When I ran this same test with the default person-detection model, each image processed in a few seconds, 3 or 4, which isn’t fast, but I figured it was no big deal.
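For reference, this is roughly the loop I’m running. A minimal sketch: the frames/ and out/ directories and the .jpg extension are just placeholders for my setup, and I’m assuming detectnet-console’s usual input/output filename arguments.

#!/usr/bin/env bash
# Feed each still frame to detectnet-console, one process per image.
# (Placeholder paths; each invocation loads the network from scratch.)
mkdir -p out
for img in frames/*.jpg; do
    ./detectnet-console "$img" "out/$(basename "$img")"
done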
Now that each image takes well over a minute, processing even a small sample will take hours.
Is there any way to speed up how each image is processed?