Jetson Nano performance issue

I am attaching a file with some benchmarks using the default DeepStream Caffe model, tested through the Python bindings (specifically deepstream-test3). The video file used is the sample video that ships with DeepStream.

I have many questions, but let's start with one. When I run the config file source8_1080p_dec_infer_resnet_tracker_tiled_display_fp16_nano.txt through deepstream-app -c <config.txt>, I get around 24 fps with 8 video file sources.

However, when running the equivalent Python DeepStream app, specifically deepstream-test3, with the default pgie_config settings and the same Caffe model, I am not getting the same fps.

What am I missing or doing wrong?

I have a custom model trained with DetectNet (ResNet-10), but it performs poorly compared to the Caffe model; that is altogether a different topic of discussion.

I just need the same fps in my Python DeepStream app; how do I achieve that?

All tests were run on a Jetson Nano 4GB.

benchmark.ods (16.5 KB)

Please check the link below and make sure you use the same configuration properties to achieve the same performance:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Performance.html#deepstream-reference-model-and-tracker
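
For example, the properties that usually have to match the reference config are the streammux batch size and resolution, the nvinfer batch size and precision, and a non-syncing sink. Below is a minimal sketch based on the Python sample apps; the element names, config file name, and values are assumptions, not your exact deepstream-test3 code:

```python
# Sketch of the performance-relevant properties in a Python DeepStream pipeline.
# Values are assumptions matching the 8-source fp16 reference config; adjust to your setup.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

NUM_SOURCES = 8  # same number of streams as the 8-source reference config

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", NUM_SOURCES)      # batch all 8 sources together
streammux.set_property("width", 1920)                  # match the muxer resolution
streammux.set_property("height", 1080)
streammux.set_property("batched-push-timeout", 40000)  # microseconds

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "dstest3_pgie_config.txt")
pgie.set_property("batch-size", NUM_SOURCES)           # should equal the streammux batch-size

sink = Gst.ElementFactory.make("nveglglessink", "display-sink")
sink.set_property("sync", 0)   # do not clock-sync to the file's framerate
sink.set_property("qos", 0)

# In the pgie config file, also check that network-mode=2 (FP16) and batch-size=8,
# the same as in the fp16_nano reference config.
```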
Regarding your model's performance, you can use trtexec to check your model's inference performance first. To compare with the built-in model, you also need to test the built-in model's inference performance with trtexec, which is located at /usr/src/tensorrt/bin/trtexec.
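
For example, a rough sketch of driving trtexec from Python for the built-in Caffe model is shown below; the helper function, file paths, and output blob names are assumptions based on the DeepStream sample configs, so adjust them to your setup:

```python
# Sketch: benchmark a Caffe model with trtexec in FP16 at the same batch size
# used by the DeepStream pipeline. Paths and blob names are placeholders.
import subprocess

TRTEXEC = "/usr/src/tensorrt/bin/trtexec"

def benchmark_caffe(prototxt, caffemodel, outputs, batch=8):
    """Run trtexec on a Caffe model and print its timing summary."""
    cmd = [
        TRTEXEC,
        "--deploy=%s" % prototxt,    # Caffe prototxt
        "--model=%s" % caffemodel,   # Caffe weights
        "--batch=%d" % batch,        # match the DeepStream batch size
        "--fp16",                    # same precision as the fp16 reference config
    ]
    cmd += ["--output=%s" % name for name in outputs]  # output blob names
    result = subprocess.run(cmd, stdout=subprocess.PIPE, universal_newlines=True)
    print(result.stdout)

# Example call (paths/blobs assumed from the sample Primary_Detector config):
# benchmark_caffe(
#     "/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.prototxt",
#     "/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel",
#     ["conv2d_bbox", "conv2d_cov/Sigmoid"],
#     batch=8,
# )
```

Running the same kind of command on your custom DetectNet ResNet-10 engine gives a like-for-like comparison of pure inference throughput, independent of the rest of the pipeline.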
