I've noticed a few posts (including my own) claiming poor performance, but one thing that isn't mentioned is that we're all running these inference checks and benchmarks with the GUI desktop enabled on the Jetson Nano.
In reality, if people are running inference at the edge, I'd imagine they would not have a monitor or GUI running at all.
Therefore I suggest providing information on running the examples in headless mode (no GUI), and even on disabling GUI boot entirely, since even when booting for SSH-only use the system still allocates resources for the GUI. (I do this on my Raspberry Pi by disabling boot-to-GUI.)
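For what it's worth, on Ubuntu-based images (which is what L4T on the Nano is built on) the usual way to do this is by switching the default systemd target; this is a sketch assuming your image uses systemd:

```shell
# Boot to a text console instead of the desktop (takes effect on next boot)
sudo systemctl set-default multi-user.target
sudo reboot

# To restore the desktop later:
sudo systemctl set-default graphical.target
```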
I think users could benefit from this because this is how we'd actually run inference at the "edge", and we could then see the true FPS and benchmark numbers.
Specifically, it would help to have information on adding an FPS indicator to the post-inference video operations, so users can watch the post-processing FPS.
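To illustrate what I mean by an FPS indicator, here is a minimal, generic sketch in plain Python (not tied to any Jetson API) that smooths per-frame timings into an FPS readout; the `FPSMeter` name and the 20 ms sleep standing in for post-processing work are my own invention:

```python
import time

class FPSMeter:
    """Tracks frames-per-second with an exponential moving average."""
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.fps = None      # smoothed FPS estimate
        self._last = None    # timestamp of the previous frame

    def tick(self):
        """Call once per processed frame; returns the smoothed FPS so far."""
        now = time.perf_counter()
        if self._last is not None:
            inst = 1.0 / (now - self._last)
            self.fps = inst if self.fps is None else (
                self.smoothing * self.fps + (1.0 - self.smoothing) * inst)
        self._last = now
        return self.fps

# Example: simulate 10 frames of ~20 ms post-processing each
meter = FPSMeter()
for _ in range(10):
    time.sleep(0.02)   # stand-in for the real post-processing work
    fps = meter.tick()
print(f"post-processing FPS: {fps:.1f}")
```

You would call `tick()` right after the post-processing step of each frame and overlay or log the returned value.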
Thanks for reading.
Hi Roberto, thanks for your feedback. Note that the benchmarks don't use GUI visualization, and you can run them headless by not attaching a monitor to your Nano. The Hello AI World applications aren't meant as benchmarks, since I haven't optimized every nook and cranny of them; instead, they're intended to be simple to follow and easy to use.
However, you can also run the Hello AI World applications headless in the same way. The camera applications will attempt to create an OpenGL display, but if no display is attached they will skip the rendering. If you're connecting to your Nano over SSH, disable X11 forwarding, because the applications may otherwise still try to create a window over the SSH tunnel.
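For example, assuming you're using OpenSSH on the client side, X11 forwarding can be disabled per connection with the `-x` flag (the hostname below is just a placeholder):

```shell
# -x explicitly disables X11 forwarding for this session
ssh -x user@jetson-nano.local

# Or persistently, in ~/.ssh/config on the client machine:
#   Host jetson-nano.local
#       ForwardX11 no
```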
In the recent updates to Hello AI World, I improved the granularity of the performance readout to include a breakdown of the pre/post-processing times, in addition to the core network time and the visualization time.