Deepstream tensor meta application running extremely slow

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 5.0
• JetPack Version (valid for Jetson only) - 4.3
• TensorRT Version - 7+

Hi,
I am using the DeepStream tensor meta app with one PGIE (YOLO face) and one SGIE (FaceNet), but it is running extremely slow, even when used with its example PGIE and SGIE files. Below I have attached the test app for reference: deepstream_infer_tensor_meta_test.cpp (29.0 KB)

Hi,

May I know which YOLO model you chose?

For YOLOv3, Nano may only reach roughly 2~3 fps due to limited resources.
You can check this comment on how to raise it to around 20 fps via periodic detection.
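
For reference, a minimal sketch of how periodic detection is usually configured in the nvinfer PGIE config file (the interval value here is only an illustrative assumption; with interval=4, full inference runs on every 5th frame and the tracker carries objects in between):

[property]
# Skip 4 frames between inference calls; the tracker fills the gaps
interval=4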

Another suggestion is to use YOLOv3 Tiny instead.
You can find the detailed performance numbers below; it can achieve 49 fps on Nano.

Thanks.

I am sorry, I should have mentioned it: it's YOLOv3 Tiny. @AastaLLL

Hi,

YOLOv3 Tiny is expected to give much better performance.
Did you maximize the device performance first?

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
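
You can confirm the current power mode and clock state afterwards with the standard Jetson tools (shown here as a general check, not specific to this app):

$ sudo nvpmodel -q
$ sudo tegrastats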

Thanks

Yes, I have done that already. Can you please check the code I've attached? I think there is some problem with the pipeline. @AastaLLL

Any update regarding this issue, @AastaLLL?

Hi,

We are trying to reproduce this issue.
Will get back to you soon.

Thanks.


Hi shubham.shah09,

We tried to test with your code, but got the errors below:

$ ./deepstream-infer-tensor-meta-app ../../../../samples/streams/sample_720p.h264 
With tracker
Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed
Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed
Now playing...
Using winsys: x11 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:00.244778126  9655   0x5599b1fc40 WARN                 nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<secondary1-nvinference-engine> error: Configuration file parsing failed
0:00:00.244839117  9655   0x5599b1fc40 WARN                 nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<secondary1-nvinference-engine> error: Config file path: ./dstest2_sgie1_config.txt
Running...
ERROR from element secondary1-nvinference-engine: Configuration file parsing failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:dstensor-pipeline/GstNvInfer:secondary1-nvinference-engine:
Config file path: ./dstest2_sgie1_config.txt
Returned, stopping playback
Deleting pipeline

Please share the “dstest2_sgie1_config.txt” file. Thanks!

Sure. I am using YOLO face as the PGIE; how will you reproduce that? I'll attach the SGIE config file below. The SGIE is for FaceNet, but the forum is not allowing me to upload the model files here as they are too big. deepstream_infer_tensor_meta_test.cpp (33.6 KB) dstest2_sgie1_config.txt (3.7 KB)

Hi shubham.shah09,

Please put the files on a cloud drive and share the link here.
Thanks!

Hello, this link contains all the files. @carolyuu

Hi,

We can run your pipeline in our environment successfully.

May I know how you measure the performance?
Do you calculate the fps with NvDsAppPerfStruct?
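
For comparison, here is a minimal sketch of one way to count fps with a plain GStreamer buffer probe on the sink element (a hypothetical counter for illustration only, not the NvDsAppPerfStruct path used by deepstream-app):

#include <gst/gst.h>

static guint64 frame_count = 0;
static gint64 last_report_us = 0;

static GstPadProbeReturn
fps_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  gint64 now_us = g_get_monotonic_time ();

  if (last_report_us == 0)
    last_report_us = now_us;
  frame_count++;

  /* Print an fps figure roughly once per second of wall-clock time. */
  if (now_us - last_report_us >= G_USEC_PER_SEC) {
    gdouble fps = (gdouble) frame_count * G_USEC_PER_SEC /
        (gdouble) (now_us - last_report_us);
    g_print ("Measured fps: %.2f\n", fps);
    frame_count = 0;
    last_report_us = now_us;
  }
  return GST_PAD_PROBE_OK;
}

/* Attach after the pipeline is built, e.g. on the video sink's sink pad:
 *   GstPad *sinkpad = gst_element_get_static_pad (sink, "sink");
 *   gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
 *       fps_probe_cb, NULL, NULL);
 *   gst_object_unref (sinkpad);
 */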

Thanks.

What are the results you received? And yes, that's how I calculate FPS.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

How do you calculate the fps?
Do you have some customized implementation?

If so, would you mind sharing the source with us as well?
We want to make sure we are both using the same source.

Thanks.