Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Xavier NX and Jetson NANO
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1-b17
• TensorRT Version 7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
deepstream-test5$ ./deepstream-test5-app -c configs/test5_config_file_src_infer.txt
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I can use deepstream-test5-app with the MQTT MsgConvBroker to successfully send inference results to a mosquitto broker on Xavier NX with DeepStream 5.0.
However, on DeepStream 5.0/5.1 on Jetson Nano it only sends a partial payload at the very beginning of execution, and the message “PERF: 0.00 (0.XX)” is then displayed repeatedly without end. In the same environment, deepstream-test5-app runs inference successfully if I simply disable sink1 type=6 (MsgConvBroker).
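For reference, the sink1 MsgConvBroker section under discussion looks roughly like this in the test5 config (a sketch only; the adapter library path, broker address, and topic below are placeholders, not the poster's actual values):

```ini
[sink1]
enable=1
# type=6 selects the message converter + broker sink (MsgConvBroker)
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
# 0 = DeepStream JSON schema payload
msg-conv-payload-type=0
# Protocol adapter library for MQTT (placeholder path; the poster uses a custom libmqtt.so)
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libmqtt.so
# host;port of the mosquitto broker (placeholder)
msg-broker-conn-str=localhost;1883
topic=dstest5
# 0 = message converter enabled (this key becomes important later in the thread)
disable-msgconv=0
```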
Please help me and let me know any other information you need. Thank you.
Hey, so it works well on Xavier NX but cannot work well on Nano?
Yes, my settings and libmqtt.so work well on Xavier NX, but after switching to Jetson Nano it only sends out inference messages at the beginning.
Please check the console log with EN_DEBUG enabled. Thanks. debug.log (4.0 KB)
To clarify the previous symptom: the whole payload is sent, but only at the very beginning of execution.
With [streammux] batch-size=1 and [primary-gie] batch-size=4, I get normal PERF values, but only for a short period. With these settings no metadata at all is sent out over MQTT, even though a Wireshark capture shows the MQTT handshake completing. It seems deepstream-test5-app is not handing metadata to the msgbroker.
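For concreteness, the batch-size combination described above corresponds to these two sections of the test5 config (a sketch; all other keys omitted):

```ini
[streammux]
# One frame per muxed batch
batch-size=1

[primary-gie]
# Inference engine expects batches of 4 -- mismatched with streammux above
batch-size=4
```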
nvdp_test5_20210318.log (1.5 KB)
Please guide how to do it. Thank you very much.
Please check CPU/GPU utilization for this setting with tegrastats, and change the pgie batch size to 1 to see if there is any improvement.
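tegrastats ships with JetPack, so it should already be on the board; a typical invocation (the interval is in milliseconds) looks like this, assuming the default install puts it on the PATH:

```shell
# Print CPU/GPU/memory utilization once per second; Ctrl-C to stop
sudo tegrastats --interval 1000
```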
The config for my previous console log already has [streammux] batch-size=1 and [primary-gie] batch-size=1, and the run still shows normal PERF for only a short period. Please let me know what other information you need. Thank you very much.
But if an engine file already exists, nvinfer will use that engine file directly. Your engine file was built with batch size 4, and your console log shows it is being loaded directly. Please let me know if I have misunderstood anything.
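If the goal is to actually run with batch size 1, one way (a sketch; the engine file name below is a placeholder modeled on the sample configs, not the poster's actual path) is to make the configured batch size and the engine file agree, or delete the stale engine so nvinfer rebuilds it from the model:

```ini
[primary-gie]
batch-size=1
# If the engine at this path was serialized with batch-size=4, delete it
# (or change the path) so nvinfer regenerates an engine matching the
# batch-size configured above.
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
```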
Sorry, I am a DeepStream beginner, so I have attached the test5 config and console log again. Thank you again.
nvdp_test5_20210319.log (1.5 KB)
test5_config_file_src_infer_20210319.txt (6.0 KB)
The setting disable-msgconv=0 is critical in my configuration; its default value is 0, and I had mistakenly set it to 1. At the same time, [primary-gie] batch-size=1 is necessary.
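So, per the resolution above, the working combination on the Nano amounts to the following (a sketch; all other keys unchanged from the posted config):

```ini
[primary-gie]
# Must match the engine actually loaded by nvinfer
batch-size=1

[sink1]
# Must be 0 so the message converter runs before the broker;
# it had been set to 1 by mistake, which suppressed the MQTT payloads
disable-msgconv=0
```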