Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Xavier
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 4.5
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Questions
I generated a YOLOv3 engine with INT8 precision on Xavier and ran inference on the engine with the “deepstream-app” command on 4 input streams. The results look smooth, and each channel reaches a satisfactory frame rate of 35 FPS. Here are the configs for the “deepstream-app” command.
config_infer_primary.txt (3.4 KB)
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt (4.9 KB)
Then I tried to reproduce the same result by running “deepstream-test3-app” with 4 streams, but the results look terrible, with a frame rate below 1 FPS. Here are the code and configs:
deepstream_test3_app.c (16.1 KB)
config_infer_primary.txt (3.4 KB)
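For context, these are the nvinfer keys in config_infer_primary.txt that I understand matter most for throughput (a sketch with placeholder values, not a copy of the attached file; the key names are standard nvinfer config properties):

```
[property]
# precision mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
# should match the number of streams batched by nvstreammux
batch-size=4
# run inference on every frame; >0 skips frames between inferences
interval=0
```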
Would you mind explaining the difference between these two approaches? What can I do to improve the throughput of “deepstream-test3-app” so that it is comparable to “deepstream-app”? Thank you.