Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Xavier NX and Jetson Nano
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1-b17
• TensorRT Version: 7.1.3.0-1+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing):
  deepstream-test5$ ./deepstream-test5-app -c configs/test5_config_file_src_infer.txt
• Requirement details (This is for new requirements. Include the module name - for which plugin or for which sample application - and the function description):
Hi,
I can use deepstream-test5-app with an MQTT MsgConvBroker sink to send inference results to a mosquitto broker on Xavier NX with DeepStream 5.0 successfully. However, on DeepStream 5.0/5.1 on Jetson Nano it only sends a partial payload at the beginning of execution, and the message "PERF: 0.00 (0.XX)" is then displayed repeatedly without end. In the same environment, deepstream-test5-app runs inference successfully if I simply disable sink1 with type=6 (MsgConvBroker).
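For reference, this is roughly what my [sink1] MsgConvBroker section looks like (a sketch only; the MQTT adapter library path, connection string, and topic are placeholders for my own setup rather than values from the stock sample config):

[sink1]
enable=1
# type=6 selects the MsgConvBroker sink (nvmsgconv + nvmsgbroker)
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
# 0 = full DeepStream schema payload, 1 = minimal schema
msg-conv-payload-type=0
# placeholder path to the MQTT protocol adapter library used with nvmsgbroker
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mqtt_proto.so
# host;port of the mosquitto broker (placeholder)
msg-broker-conn-str=127.0.0.1;1883
topic=deepstream-events
# 0 (default) keeps payload generation enabled
disable-msgconv=0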
Please help, and let me know what other information you need. Thank you.
To clarify the previous symptom: the app does send the whole payload, but only once at the very beginning of execution.
With [streammux] batch-size=1 and [primary-gie] batch-size=4, I get normal PERF numbers, but only for a short period. With these settings no metadata is sent out over MQTT at all, although a Wireshark capture shows that the MQTT handshake completes. It seems that deepstream-test5-app does not pass any metadata to the msgbroker sink.
The configuration behind my previous console log was [streammux] batch-size=1 and [primary-gie] batch-size=1, and that run also showed normal PERF numbers for only a short period. Please let me know what other information you need. Thank you very much.
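For clarity, these are the two combinations I have tried so far (a sketch of the relevant sections of my test5_config_file_src_infer.txt):

# Combination A: MQTT handshake completes but no metadata is published
[streammux]
batch-size=1

[primary-gie]
batch-size=4

# Combination B: the configuration behind the previous console log
[streammux]
batch-size=1

[primary-gie]
batch-size=1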
But if an engine file is provided, nvinfer will use that engine file directly. Your engine file was built with batch size 4, and your console log shows that you are using the engine file directly. Please let me know if there is any misunderstanding here.
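As an illustration (the file name and path below are only examples, not taken from the original post), DeepStream encodes the batch size in the names of the engine files it generates, so a [primary-gie] section like the following loads a batch-4 engine directly:

[primary-gie]
enable=1
# the "_b4_" part of the generated name indicates the engine was built with max batch size 4
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_fp16.engine
batch-size=4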
The setting disable-msgconv=0 turned out to be critical in my configuration. Its default value is 0, but I had mistakenly set it to 1. In addition, [primary-gie] batch-size=1 is necessary.
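In other words, the combination that works for me is the following (a sketch; all other keys left unchanged):

[sink1]
type=6
# must remain 0 so that nvmsgconv actually generates the payload for the broker
disable-msgconv=0

[streammux]
batch-size=1

[primary-gie]
batch-size=1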