Jetson Nano object detection: missing “retrieved Input tensor ‘input’: xxxx”

Hi support team
I’m very new to the Jetson Nano and object detection.
I bought a Jetson Nano 4GB board a few months ago. Recently I tried to build and test object detection with MobileNet v2, following this sample: “Real-Time Object Detection in 10 Lines of Python Code on Jetson Nano - YouTube”
I followed every step as best I could, but in the end I found that ./detectnet-console.py images/xx.jpg output_0.jpg would not run properly. The “retrieved Input tensor ‘input’: xxxx” message was missing from the screen, and the console seemed to be stuck in an infinite loop. I have attached screenshots for reference.
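For reference, this is roughly the 10-line script from the video that I was following (a minimal sketch from memory; the ssd-mobilenet-v2 model name and the image filenames are example values, not my exact files):

import jetson.inference
import jetson.utils

# load the SSD-MobileNet-v2 detection network
# (the first load triggers the TensorRT optimization)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# load an input image, run detection, and save the annotated result
img = jetson.utils.loadImage("images/peds_0.jpg")
detections = net.Detect(img)
print("detected {:d} objects".format(len(detections)))
jetson.utils.saveImage("output_0.jpg", img)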
Please kindly assist with this issue.
Thank you
Banatus S

Hi @banatuss, the first time you load the model, it will take a few minutes to optimize the network with TensorRT. After the first run, it should save the optimized model to disk and then load quickly on subsequent runs.

How long did you leave it running? It may look like an infinite loop, but TensorRT is probably just running its optimizations.
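If you want to confirm that, you could time the network load; a minimal sketch (using ssd-mobilenet-v2 as an example) would be something like the following. The first run should take minutes while the engine is built, and later runs only seconds once the serialized engine is cached:

import time
import jetson.inference

start = time.time()
# the first call builds and serializes the TensorRT engine (can take several minutes on a Nano);
# subsequent calls just deserialize the cached engine from disk
net = jetson.inference.detectNet("ssd-mobilenet-v2")
print("network loaded in {:.1f} seconds".format(time.time() - start))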

Thank you Dusty. I tried to run it again and watched it for 5 minutes.
Unfortunately, the Jetson Nano then shut down unexpectedly.
I checked the board and no power LED was lit at all.
Could it be that the first time, I panicked and manually closed the terminal before it finished the optimization?
Can I fix this, or should I format the SD card and install the OS again?
Thank you
Note: do you have a link to an NVIDIA Docker container for object detection?
Please kindly give me some advice.
Banatus Soiraya

Hi @banatuss, it sounds like your power supply may not have been able to keep up with the load, so you may want to try switching your Nano to 5W mode (sudo nvpmodel -m 1) or try one of the DC barrel jack adapters from the Jetson Nano Supported Component List.
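For example, to switch modes and then verify the change (nvpmodel -q queries the active mode):

sudo nvpmodel -m 1   # switch to 5W mode
sudo nvpmodel -q     # verify the current power mode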

Is your Nano able to boot up again? If it still boots, try my power suggestion above and then re-run the detectnet program. If it doesn’t boot, you may want to re-flash your SD card.

Also, yes, you can run this in a Docker container; see here for more info:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md

Note that it will need to do the same TensorRT optimization the first time it runs the model though.
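For reference, launching the container per that page looks roughly like this (double-check the docs for the exact steps on your JetPack version):

git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
docker/run.sh

The run.sh script should pull and start the appropriate container for your version of JetPack-L4T.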

Thank you Dusty,
I was able to run your commands successfully. It helped a lot.
The optimization process took about 5 minutes to complete.