jetson-inference object detection not producing the detection (output) window

Hi, I completed the full AI fundamentals course and have even tried a fresh JetPack installation multiple times, but I am still facing a problem.
My setup:
Hardware - Jetson Nano 4GB version with a CSI camera
Software - JetPack 4.5.1 (latest)

I did a complete installation of jetson-inference using both methods: 1) building from source and 2) downloading and running the Docker container.

Problem:
When I execute the ./detectnet --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg command inside the jetson-inference/build/aarch64/bin/ folder, it does not produce the output detection window.

I have attached error.txt, a copy of the complete terminal output.

Kindly assist me.
Thanks and Regards

error.txt (467.1 KB)

Hi @sumanjha090, it looks like you stopped the process before TensorRT finished optimizing the model. The first time you load a model, TensorRT takes a few minutes to optimize it for realtime performance. It then caches the optimized engine to disk, so on subsequent runs with that model it loads almost instantly.

Try running it again, and leave it running until the TensorRT optimizer completes. The window should then open and processing should start.
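As a rough sketch of what to expect (the engine filename and the networks/ location are assumptions that vary with the model and TensorRT version), you can check whether the optimized engine has already been cached before deciding how long the next run will take:

```shell
# Run from jetson-inference/build/aarch64/bin/.
# After the first successful run, TensorRT caches the optimized engine
# next to the downloaded model files (e.g. under networks/SSD-Mobilenet-v2/);
# the exact *.engine filename depends on your TensorRT version.
find networks -name '*.engine' 2>/dev/null | grep . \
  || echo "no cached engine yet - the next run will spend a few minutes optimizing"
```

If an .engine file is listed, the model should load quickly; if not, let the process run uninterrupted until optimization completes.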

Thanks @dusty_nv for the quick response. I tried again and it worked after about 5 minutes of processing. Great!