It takes 2 minutes for deepstream_python_apps to start.

https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

I tried “deepstream-test1 (resnet10.caffemodel)” from the sample above.
It works correctly, but it takes 2 minutes to start up each time on Jetson Nano.

generateTRTModel (): INT8 not supported by platform.Trying FP16 mode.

This log line is displayed during the wait.

Since Nano doesn’t support INT8, that part can’t be helped,
but is there a way to get it up and running quickly?

Hi,
By default, the settings in dstest1_pgie_config.txt are for Jetson Xavier. You need to modify them for Jetson Nano. There is guidance for modifying deepstream-test3:
https://devtalk.nvidia.com/default/topic/1058597/deepstream-sdk/-nano-deepstream-test3-app-not-working-as-expected-for-multiple-video-source/post/5368352/#5368352
The same applies to deepstream-test1. FYR.
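For reference, the slow startup typically happens because nvinfer rebuilds the TensorRT engine on every run: the config points at an INT8 engine that Nano cannot build, so it falls back to FP16 and regenerates the engine each time instead of deserializing a cached one. A minimal sketch of the likely edit to dstest1_pgie_config.txt (the exact engine filename and relative path depend on your DeepStream version and are assumptions here):

```ini
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
# Jetson Nano has no INT8 support, so select FP16 explicitly
network-mode=2
# Point at an FP16 engine name (hypothetical path, adjust to your install).
# The first run builds and serializes this engine; subsequent runs
# deserialize it and start in seconds instead of minutes.
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
```

With a matching engine file on disk, the "INT8 not supported by platform. Trying FP16 mode." rebuild path is skipped on later runs.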

Thank you very much, DaneLLL, for your advice.

The “deepstream-test1 (resnet10.caffemodel)” demo
now starts within 5 seconds, thanks to the information you provided.

It was very helpful!