Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU (Tesla T4)
• DeepStream Version: deepstream-6.1.1-triton Docker image
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 515.65.01 (CUDA 11.7)
• Issue Type (questions, new requirements, bugs): questions
I am trying to send the detection messages from my own models, but it seems that I cannot get the correct information. Here are my settings and the message details received from Kafka:
Setting details:
I do not use the vehicle classification from the sample; I only use two models of my own.
The official sample command works for me, but it fails once I replace the settings with my own models.
Here is my configuration: dec_parallel_infer.yml (6.5 KB)
To narrow down this issue, please add logs in generate_event_message_minimal in /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv/deepstream_schema/eventmsg_payload.cpp and rebuild the code according to /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv/README. You can back up the old /opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so and then replace it with the newly built library.
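For example, a helper like the one below, called for each event's metadata inside generate_event_message_minimal, will show exactly what the library receives (a minimal sketch; the helper name log_event_msg_meta is only for illustration, and the field names follow nvdsmeta_schema.h):

```cpp
// Sketch of a debug helper for eventmsg_payload.cpp (hypothetical name).
// Call it for each NvDsEvent's metadata pointer inside
// generate_event_message_minimal to verify what msgconv actually receives.
#include <glib.h>
#include "nvdsmeta_schema.h"

static void
log_event_msg_meta (const NvDsEventMsgMeta *meta)
{
  if (!meta)
    return;
  g_print ("msgconv: class=%d sensor=%d frame=%d conf=%.3f "
           "bbox(l,t,w,h)=%.1f,%.1f,%.1f,%.1f obj=%s\n",
           meta->objClassId, meta->sensorId, meta->frameId,
           meta->confidence,
           meta->bbox.left, meta->bbox.top,
           meta->bbox.width, meta->bbox.height,
           meta->objectId ? meta->objectId : "(null)");
}
```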
I have checked eventmsg_payload.cpp and found that it works correctly, so I did not rebuild libnvds_msgconv.so.
I also tried adding a print to check whether I get the correct detection data, and that seems to work. But I still cannot find the position data in the Kafka messages.
I followed your suggestion and rebuilt the library, but I added the logs in dsmeta_payload.cpp instead of eventmsg_payload.cpp, because msg-conv-msg2p-new-api is set to 1 in my sink.
And the results are shown below:
I also added a print log in osd_sink_pad_buffer_probe in deepstream_parallel_infer_app.cpp, roughly as sketched below.
I am still confused about the result.
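The print I added looks roughly like this (a simplified sketch of the probe body; the probe name and log format are only for illustration):

```cpp
// Sketch of a pad probe that dumps per-object detection meta, similar to the
// print added in osd_sink_pad_buffer_probe (the names here are illustrative).
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
dump_detections_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    // source_frame_width/height are the per-source dimensions that later
    // end up (or not) in the Kafka payload.
    g_print ("source %u frame %d src_dims=%ux%u\n",
             frame_meta->source_id, frame_meta->frame_num,
             frame_meta->source_frame_width, frame_meta->source_frame_height);

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      g_print ("  class=%d label=%s conf=%.3f bbox=%.0f,%.0f %.0fx%.0f\n",
               obj->class_id, obj->obj_label, obj->confidence,
               obj->rect_params.left, obj->rect_params.top,
               obj->rect_params.width, obj->rect_params.height);
    }
  }
  return GST_PAD_PROBE_OK;
}
```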
Could you help clarify how to get the frame source width and height? I found that they always stay zero.
I will try to follow the code to find the reason. It is really hard to understand why this data in the pipeline suddenly becomes zero after some frames.
After nvmultistreamtiler, the frame meta info, including the source dimensions, is no longer preserved. You can add probes on the nvmultistreamtiler sink and src pads to confirm this.
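For example, something like the following (a minimal sketch; the function and probe names are only for illustration, and the tiler element pointer comes from your application code):

```cpp
// Sketch: attach the same buffer probe to both pads of the tiler to see
// where source_frame_width/height become zero.
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
dump_src_dims_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta ((GstBuffer *) info->data);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
    NvDsFrameMeta *fm = (NvDsFrameMeta *) l->data;
    g_print ("[%s:%s] source %u src_dims=%ux%u\n",
             GST_DEBUG_PAD_NAME (pad), fm->source_id,
             fm->source_frame_width, fm->source_frame_height);
  }
  return GST_PAD_PROBE_OK;
}

static void
attach_tiler_probes (GstElement *tiler)
{
  GstPad *sinkpad = gst_element_get_static_pad (tiler, "sink");
  GstPad *srcpad  = gst_element_get_static_pad (tiler, "src");
  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
                     dump_src_dims_probe, NULL, NULL);
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
                     dump_src_dims_probe, NULL, NULL);
  gst_object_unref (sinkpad);
  gst_object_unref (srcpad);
}
```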
Currently, there are two solutions:
1. Use msg-conv-msg2p-new-api = 0.
2. Use msg-conv-msg2p-new-api = 1, but then the msgconv plugin needs to be in front of nvmultistreamtiler. As the comment in source4_1080p_dec_parallel_infer.yml says, "sink type = 6 by default creates msg converter + broker. To use multiple brokers use this group for converter and use sink type = 6 with disable-msgconv : 1". Please set disable-msgconv: 1 in the sink and enable the message-converter group.
Thanks! I had already set msg-conv-msg2p-new-api = 0 before, and with that I get the correct position data but wrong labels (all of the labels are Vehicle). I provided a snapshot of this earlier.
I will try the second solution to see if it works.
Please dump the GStreamer pipeline graph to check whether msgconv is in front of the tiler.
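For example, with the GST_DEBUG_DUMP_DOT_DIR environment variable pointing at an existing directory before the application starts, a call like the following writes the graph out (a minimal sketch; the function name and file tag are only for illustration):

```cpp
// Sketch: dump the running pipeline to a .dot file so the element order
// (msgconv vs. tiler) can be checked. GST_DEBUG_DUMP_DOT_DIR must be set
// in the environment before the application starts.
#include <gst/gst.h>

static void
dump_pipeline_graph (GstElement *pipeline)
{
  // Writes <GST_DEBUG_DUMP_DOT_DIR>/parallel-infer.dot; convert it with
  // graphviz (dot -Tpng) to view the topology.
  GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline),
                             GST_DEBUG_GRAPH_SHOW_ALL, "parallel-infer");
}
```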
If msgconv is in front of the tiler, frame_meta->source_frame_width and frame_meta->source_frame_height should not be zero, and then scaleW will not be zero either. Please add logs to check.
That is not clear; please attach it as a zip. And again, if msgconv is in front of the tiler, frame_meta->source_frame_width and frame_meta->source_frame_height should not be zero, and then scaleW will not be zero. Please add logs to check.
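For illustration only, the kind of guard and log meant here could look like this (the exact scaling expression in dsmeta_payload.cpp may differ; the helper name is hypothetical):

```cpp
// Hypothetical illustration: log the inputs of the scale computation and
// guard against a zero source dimension before dividing. The real code in
// dsmeta_payload.cpp may compute the scale differently.
#include <glib.h>

static gdouble
checked_scale (guint output_dim, guint source_dim, const gchar *tag)
{
  if (source_dim == 0) {
    g_warning ("%s: source dimension is 0, scale would be invalid", tag);
    return 0.0;
  }
  g_print ("%s: output=%u source=%u scale=%f\n",
           tag, output_dim, source_dim,
           (gdouble) output_dim / source_dim);
  return (gdouble) output_dim / source_dim;
}
```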