Deepstream-test3 streaming and inferencing

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Xavier
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 4.5
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Questions

I generated a YOLOv3 engine with INT8 precision on Xavier and ran inference on the engine with the “deepstream-app” command with 4 streaming channels. The results look smooth, and each channel achieves a satisfactory frame rate of 35 FPS. Here are the configs for the “deepstream-app” command:
config_infer_primary.txt (3.4 KB)
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt (4.9 KB)
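The attached configs are not inlined above, but the settings that usually drive multi-stream INT8 throughput in this setup look roughly like the excerpt below. The key names are standard deepstream-app / nvinfer keys; the batch size of 4 and INT8 mode follow the post, everything else is a placeholder:

```
# deepstream-app config (source4_1080p_..._int8.txt), relevant excerpts
[application]
enable-perf-measurement=1        # print per-stream FPS to the console
perf-measurement-interval-sec=5

[streammux]
batch-size=4                     # one batch slot per channel

[primary-gie]
enable=1
batch-size=4
config-file=config_infer_primary.txt

# config_infer_primary.txt (nvinfer), relevant excerpts
[property]
batch-size=4
network-mode=1                   # 0=FP32, 1=INT8, 2=FP16
```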

Then I tried to reproduce the same result by running “deepstream-test3-app” with 4-stream inferencing, but the results look terrible, with a frame rate of less than 1 FPS. Here are the code and config:
deepstream_test3_app.c (16.1 KB)
config_infer_primary.txt (3.4 KB)

Would you mind explaining the difference between these two approaches? What can I do to improve the throughput of “deepstream-test3-app” so that it is comparable to “deepstream-app”? Thank you.
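A likely culprit with this symptom is the renderer staying clock-synced while the pipeline falls behind, plus a possible batch-size mismatch between nvstreammux and nvinfer. Below is a minimal sketch of the kind of changes to try in deepstream_test3_app.c; it assumes the element pointers from the stock sample (streammux, pgie, sink), which may be named differently in your copy:

```c
#include <gst/gst.h>

/* Sketch only: property tweaks for deepstream-test3, assuming the element
 * pointers created in the stock deepstream_test3_app.c main(). */
static void
tune_pipeline_for_throughput (GstElement *streammux, GstElement *pgie,
    GstElement *sink, guint num_sources)
{
  /* Let nvstreammux form full batches before pushing downstream. */
  g_object_set (G_OBJECT (streammux),
      "batch-size", num_sources,       /* should match the engine batch size */
      "batched-push-timeout", 40000,   /* usec; flush a partial batch after 40 ms */
      NULL);

  /* Keep nvinfer's batch size in agreement with nvstreammux; a mismatch can
   * force the INT8 engine to be rebuilt or run at an unintended batch size. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "config_infer_primary.txt",
      "batch-size", num_sources,
      NULL);

  /* Disable clock sync (and QoS) on the sink: with sync=TRUE, a pipeline that
   * falls behind stalls at the renderer, which shows up exactly as the
   * one-frame-every-few-seconds stutter described above. */
  g_object_set (G_OBJECT (sink), "sync", FALSE, "qos", FALSE, NULL);
}
```

Call this once after the elements are created and before setting the pipeline to PLAYING. If the frame rate recovers with sync=FALSE, the engine itself was fine and the stutter was only the renderer throttling playback.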

How do you measure the test3 FPS?

Hi bcao, for deepstream-test3 I simply estimated it by visually observing the output video on the screen. The output stutters, skipping whole runs of frames and displaying only about one frame every 3 seconds.

Is there any update on this question?

You can refer to DeepStream SDK FAQ - #9 by mchi to measure the FPS.
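Independently of the FAQ method, one way to get a number out of test3 itself is a buffer probe near the sink. The sketch below is illustrative, not SDK code; the fps_probe_cb helper and the choice of the tiler’s sink pad as the attach point are assumptions:

```c
#include <gst/gst.h>

/* Illustrative FPS counter: counts buffers arriving at a pad and prints the
 * rate roughly once per second. */
static GstPadProbeReturn
fps_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static guint64 frame_count = 0;
  static GstClockTime last_ts = 0;
  GstClockTime now = g_get_monotonic_time () * GST_USECOND;

  frame_count++;
  if (last_ts == 0)
    last_ts = now;

  if (now - last_ts >= GST_SECOND) {
    gdouble fps = (gdouble) frame_count * GST_SECOND / (gdouble) (now - last_ts);
    g_print ("Batched FPS at this pad: %.2f\n", fps);
    frame_count = 0;
    last_ts = now;
  }
  return GST_PAD_PROBE_OK;
}

/* Attach inside main(), after the pipeline is built:
 *
 *   GstPad *pad = gst_element_get_static_pad (tiler, "sink");
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, fps_probe_cb, NULL, NULL);
 *   gst_object_unref (pad);
 *
 * Note: upstream of the tiler each GstBuffer carries a whole nvstreammux
 * batch, so multiply the printed rate by the batch size (or probe a
 * per-stream pad) to get a per-channel figure. */
```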

Thanks for your reply. I still don’t know what causes the buffering issue when inferencing with deepstream-test3 but not with deepstream-app. I used the same INT8 engine with a batch size of 4. Would you mind helping with it?

I think we should first make sure we use the same measurement method for test3 and deepstream-app.
