Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.4.*
When I run the multi-model inference example code, the following output is printed after the two models are loaded, and the stream is not pushed until it appears. Why is that, and how can I solve it in my code? I am pushing an RTMP stream; the element chain after the OSD is nvvideoconvert -> nvv4l2h264enc -> h264parse -> flvmux -> RTMP sink (a rough sketch of this output branch is included below the log).
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA: Need to set EMC bandwidth : 188000
NVMEDIA: Need to set EMC bandwidth : 188000
NvVideo: bBlitMode is set to TRUE
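For reference, here is a minimal sketch of the output branch described above, using the Python GStreamer bindings. The RTMP URL and bitrate are placeholders, and the upstream DeepStream elements (streammux, nvinfer, nvdsosd, ...) are assumed to already exist in the pipeline:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def build_rtmp_branch(pipeline, osd, rtmp_url="rtmp://example.com/live/stream"):
    conv  = Gst.ElementFactory.make("nvvideoconvert", "out-conv")
    enc   = Gst.ElementFactory.make("nvv4l2h264enc", "h264-enc")
    parse = Gst.ElementFactory.make("h264parse", "h264-parse")
    mux   = Gst.ElementFactory.make("flvmux", "flv-mux")
    sink  = Gst.ElementFactory.make("rtmpsink", "rtmp-sink")
    if not all([conv, enc, parse, mux, sink]):
        raise RuntimeError("Failed to create an element of the output branch")

    enc.set_property("bitrate", 4000000)   # placeholder bitrate (bits/sec)
    mux.set_property("streamable", True)   # live FLV stream for RTMP
    sink.set_property("location", rtmp_url)

    for e in (conv, enc, parse, mux, sink):
        pipeline.add(e)

    # osd -> nvvideoconvert -> nvv4l2h264enc -> h264parse -> flvmux -> rtmpsink
    osd.link(conv)
    conv.link(enc)
    enc.link(parse)
    # flvmux uses request pads for its inputs, so request the video pad explicitly
    parse.get_static_pad("src").link(mux.get_request_pad("video"))
    mux.link(sink)
```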
Can I explicitly trigger this initialization in my program? When running multi-threaded, the initialization above sometimes fails and nothing is printed, which causes my model inference code to malfunction.
No, and that is not the cause of your problem. Please attach your whole pipeline and logs first (one common way to capture these is sketched below); we can start by trying to locate the root cause of your problem.
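In case it helps with collecting that information, one common way to dump the pipeline topology and verbose logs from a GStreamer/DeepStream application is shown below (Python bindings; the debug level, output directory, and file names are only examples):

```python
# Run the application with, e.g.:
#   GST_DEBUG=3 GST_DEBUG_DUMP_DOT_DIR=/tmp python3 your_app.py 2> gst.log
# and call the helper below once the pipeline has reached PLAYING.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def dump_pipeline(pipeline, name="deepstream-pipeline"):
    # Writes /tmp/<name>.dot when GST_DEBUG_DUMP_DOT_DIR=/tmp is set;
    # convert it to an image with: dot -Tpng /tmp/<name>.dot -o pipeline.png
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, name)
```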
Thank you for your reply, it is very helpful. One last question: when writing pipeline code, should we link bins or link elements directly in order to keep the connection (link) time shorter? (The two patterns I mean are sketched below.)
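For clarity, here is a minimal sketch of the two linking styles being compared (Python bindings; the element choices are arbitrary). Both produce an equivalent pipeline; the bin style just groups elements behind a ghost pad:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Style 1: add and link the individual elements directly in the pipeline.
def link_elements(pipeline, upstream):
    conv = Gst.ElementFactory.make("nvvideoconvert", "conv-direct")
    enc  = Gst.ElementFactory.make("nvv4l2h264enc", "enc-direct")
    pipeline.add(conv)
    pipeline.add(enc)
    upstream.link(conv)
    conv.link(enc)
    return enc

# Style 2: wrap the same elements in a bin with a ghost pad and
# link the bin as if it were a single element.
def link_bin(pipeline, upstream):
    out_bin = Gst.Bin.new("output-bin")
    conv = Gst.ElementFactory.make("nvvideoconvert", "conv-bin")
    enc  = Gst.ElementFactory.make("nvv4l2h264enc", "enc-bin")
    out_bin.add(conv)
    out_bin.add(enc)
    conv.link(enc)
    # Expose the first element's sink pad on the bin boundary.
    out_bin.add_pad(Gst.GhostPad.new("sink", conv.get_static_pad("sink")))
    pipeline.add(out_bin)
    upstream.link(out_bin)
    return out_bin
```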