I’m just starting out with the Jetson Nano (4 GB RAM) and I wanted to try out object detection.
I cloned the jetson-inference GitHub repository and followed the installation instructions.
Then, to try it out, I ran:
./detectnet-console --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg
However, its performance was significantly worse than in the demo video:
[TRT] ------------------------------------------------
[TRT] Timing Report networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.07917ms CUDA 1.00812ms
[TRT] Network CPU 802.01288ms CUDA 800.92664ms
[TRT] Post-Process CPU 0.09828ms CUDA 0.09844ms
[TRT] Visualize CPU 40.18549ms CUDA 40.77422ms
[TRT] Total CPU 842.37579ms CUDA 842.80743ms
[TRT] ------------------------------------------------
Note: I ran sudo jetson_clocks and sudo ldconfig beforehand, and the NV Power Mode is set to 0 (MAXN).
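For reference, this is roughly the sequence I used to set the board up before benchmarking. I’m assuming nvpmodel is the right way to describe how I put it into MAXN mode (I may have also done it through the desktop power widget):

```shell
# Select NV Power Mode 0 (MAXN), the maximum-performance profile
sudo nvpmodel -m 0

# Confirm which power mode is currently active
sudo nvpmodel -q

# Lock CPU/GPU/memory clocks to their maximum frequencies
sudo jetson_clocks

# Refresh the shared-library cache after installing jetson-inference
sudo ldconfig
```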
Possibly relevant: the following error was printed during execution:
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
as well as the following warning:
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Any ideas why this happens would be appreciated. I’ve already looked through most of the related posts in this forum, and I’m really a beginner at this.
Thanks!