The various detectnet-console examples work just fine. But when I ran detectnet-camera for the first time, the framerate was very slow: the UI indicated 1 FPS, but I suspect it was even slower than that, because it wasn’t registering any movement until much later.
I eventually had to shut it down and let it rest, but when I tried detectnet-camera again, it just turned the board off.
Now I’m not sure how to proceed. I was really looking forward to getting object detection and human pose estimation working, but it seems I’m stopped before I can even start.
imagenet-camera works well; I was getting around 15 FPS. But when I tried to run detectnet-camera, it either hung or took a very long time to process each image.
I’ve also run nvgstcapture, and it shows the feed just fine.
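Since nvgstcapture works, a standalone GStreamer pipeline can confirm the capture path independently of jetson-inference. This is a sketch assuming a CSI camera on the default sensor and the Jetson GStreamer plugins (nvarguscamerasrc, nvoverlaysink):

```shell
# Preview the CSI camera for a few seconds without any inference code.
# If this runs smoothly, raw capture is not the bottleneck.
gst-launch-1.0 nvarguscamerasrc \
  ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' \
  ! nvoverlaysink
```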
When I inspect the logs of imagenet-camera and detectnet-camera, one line is distinctly different:
imagenet-camera: camera open for streaming
GST_ARGUS: Creating output stream
detectnet-camera: failed to capture frame
detectnet-camera: failed to convert from NV12 to RGBA
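That second line suggests the capture side delivers NV12 frames and the demo then converts them to RGBA before inference, and it is that conversion step that fails. A rough way to exercise the same conversion outside the demo (an assumption on my part: that nvvidconv on this JetPack can output RGBA) is:

```shell
# Capture NV12 frames from the CSI camera and have nvvidconv convert
# them to RGBA, mirroring the step detectnet-camera reports as failing.
gst-launch-1.0 nvarguscamerasrc num-buffers=100 \
  ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1,format=NV12' \
  ! nvvidconv \
  ! 'video/x-raw,format=RGBA' \
  ! fakesink
```

If this pipeline completes without errors while detectnet-camera still fails, the problem is more likely in the demo's CUDA conversion path (or in the board being starved of power) than in the camera itself.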
Okay, this is rather odd. After rebooting the Jetson Nano, still on the same 5V 2A Samsung charger, imagenet-camera ran at 15 FPS and detectnet-camera ran at 5 FPS. The camera feed was decent, but the detections were lagging behind it. I’m not sure what exactly changed, but it’s working now…
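Given that the board shut itself off earlier, the power supply is my main suspect: a 5V 2A charger is at the low end of what the Nano wants under inference load. If anyone else hits this, capping the board at the 5W profile may keep it stable. A sketch, assuming the stock nvpmodel tool on the Nano (where mode 1 is the 5W profile and mode 0 is 10W MAXN):

```shell
# Show the currently active power mode.
sudo nvpmodel -q

# Switch to the 5W profile so the board stays within a 2A supply's budget.
sudo nvpmodel -m 1
```

The longer-term fix would be a 5V 4A barrel-jack supply so the board can run in 10W mode without browning out.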