• Hardware Platform (Jetson / GPU) - Jetson Orin Nano
• DeepStream Version - 7.0
• TensorRT Version - 8.6.2.3
• CUDA - 12.2
• Issue Type (questions, new requirements, bugs) - Question
Hi, I am trying to integrate the nvmsgconv and nvmsgbroker plugins into my pipeline to send metadata over the AMQP protocol. I set everything up with the help of the deepstream_test4 sample app, and when I ran the test app it worked fine.
However, when I integrate those plugins into my own pipeline, it starts and runs, but I don't receive any metadata, even though all the elements seem to be linked properly. For a bit more context, I am processing an RTSP stream and I added these plugins as a separate branch: one branch does the streaming and the other sends the metadata.
I used a tee element and two queue elements, where one branch goes to streaming and the other goes tee → queue → nvmsgconv → nvmsgbroker. After this integration, the streaming branch also stops working. I couldn't figure out the reason for this. My app is in C++.
Can you provide some tips to debug this issue? I already tried raising the GST_DEBUG log level. Please provide detailed instructions to find and fix the issue. Thanks in advance.
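For reference, this is roughly how I link the two branches off the tee (a simplified sketch; the element and variable names here are placeholders rather than my exact code):

#include <gst/gst.h>

// Sketch: split the stream after the tee into a streaming branch and a
// message branch. All elements are assumed to already be created with
// gst_element_factory_make() and added to the pipeline.
static gboolean link_tee_branches(GstElement *tee,
                                  GstElement *queue_stream,
                                  GstElement *queue_msg,
                                  GstElement *msgconv,
                                  GstElement *msgbroker)
{
  // Message branch: queue -> nvmsgconv -> nvmsgbroker
  if (!gst_element_link_many(queue_msg, msgconv, msgbroker, NULL))
    return FALSE;

  // Request one tee src pad per branch and link it to that branch's queue.
  GstElement *queues[2] = { queue_stream, queue_msg };
  for (int i = 0; i < 2; i++) {
    GstPad *tee_src = gst_element_request_pad_simple(tee, "src_%u");
    GstPad *queue_sink = gst_element_get_static_pad(queues[i], "sink");
    GstPadLinkReturn ret = gst_pad_link(tee_src, queue_sink);
    gst_object_unref(queue_sink);
    if (ret != GST_PAD_LINK_OK) {
      g_printerr("Failed to link tee src pad to branch %d\n", i);
      return FALSE;
    }
  }
  return TRUE;
}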
Basically, after nvinfer we use nvtracker, nvdsanalytics, nvdsosd, and then nvvideoconvert, a caps filter, and a tee element that creates two branches. One branch goes to a UDP sink, which already works fine without the nvmsgconv and nvmsgbroker integration. Now, after integrating nvmsgconv and nvmsgbroker through a separate queue on the tee element, it doesn't seem to work. I used the AMQP adaptor for nvmsgbroker, and the configuration is the same as in the deepstream_test4 app.
For context, when I run deepstream_test4 with a video file source, it works. So I just want to get the default DeepStream metadata (NvDsBatchMeta) out of the custom pipeline above. Could you please help me with this? Thanks.
The pipeline seems to be blocked after a few buffers. I tried to print the metadata from the OSD sink pad buffer probe function, but nothing gets printed on the terminal (it worked before integrating these plugins).
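The probe I am referring to follows the usual pattern from the DeepStream sample apps, roughly like this (a sketch; names are illustrative):

#include <gst/gst.h>
#include "gstnvdsmeta.h"

// Sketch of a buffer probe on the nvdsosd sink pad that prints how many
// objects are in each frame's NvDsBatchMeta.
static GstPadProbeReturn
osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    guint num_objects = 0;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next)
      num_objects++;
    g_print("Frame %d: %u objects\n", frame_meta->frame_num, num_objects);
  }
  return GST_PAD_PROBE_OK;
}

// In the pipeline setup code:
//   GstPad *osd_sink_pad = gst_element_get_static_pad(nvdsosd, "sink");
//   gst_pad_add_probe(osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
//                     osd_sink_pad_buffer_probe, NULL, NULL);
//   gst_object_unref(osd_sink_pad);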
So here I tried using tee → queue → fakesink as one branch and tee → queue → nvmsgconv → nvmsgbroker as the other branch, and it works: I am able to consume the metadata with a consumer, which means the broker is working. However, as shown in the pipeline graph, when I extend the first branch to use a UDP sink, the data flow stops/freezes. Do you have any suggestions for finding the element in that branch that is causing the issue? I just want proper data flow in both branches after the tee element.
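In case it helps anyone, a pipeline graph like the one above can be dumped with GStreamer's standard DOT facility (nothing DeepStream-specific; the file name below is just an example):

// Requires the environment variable GST_DEBUG_DUMP_DOT_DIR to point to an
// existing directory before the app starts. Call this once the pipeline is
// running (or when it appears to stall) to capture its current state.
GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL,
                          "two-branch-pipeline");
// Then convert it, e.g.: dot -Tpng two-branch-pipeline.dot -o pipeline.png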
If you link only the udpsink branch to the tee, without the nvmsgconv branch, does the pipeline still freeze? Here is a udpsink usage sample:
… ! nvv4l2h264enc bitrate=50000 control-rate=1 preset-id=3 profile=0 tuning-info-id=1
! rtph264pay
! udpsink host=224.224.255.255 port=8005
I tried this: without the nvmsgconv branch, the udpsink branch linked to the tee element works. I used x264enc as the encoding element, since we don't have a hardware encoder on the Jetson Orin Nano.
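So my streaming branch is roughly the software-encoder equivalent of the sample above (illustrative only; the exact properties and addresses are not from my real pipeline):

… ! nvvideoconvert
! videoconvert
! x264enc bitrate=4000 speed-preset=ultrafast
! h264parse
! rtph264pay
! udpsink host=224.224.255.255 port=8005 sync=false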
I still couldn't solve the data flow / pipeline stall issue. Are there any more tips I can try? I even tried increasing the queue size on the nvmsgconv branch, which doesn't seem to help. Also, do we run into any other issues when using the x264enc encoder with a UDP sink in this two-branch scenario?
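For the record, by "increasing the queue size" I mean setting the standard GStreamer queue limits, roughly like this (the values are just what I experimented with):

// 0 means "no limit" for each of the three queue thresholds.
g_object_set(G_OBJECT(queue_msg),
             "max-size-buffers", 0,
             "max-size-bytes", 0,
             "max-size-time", (guint64) 0,
             NULL);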
Could you share the result of “gst-inspect-1.0 x264enc | grep -A 10 tune”? Here is my test: log1025.txt (1.4 KB).
According to the documentation, x264enc should have a tune property.
Yes, you are right, it actually is a property, but when I add it in my C++ file it gives that warning.
And this is the result of the command you asked for. It seems like the property is not there.
Noticing that you are testing on the host, please refer to this link for how to install GStreamer. The GStreamer version should be 1.20.3 if you are using DS 7.0. You can use “dpkg -l | grep GStreamer” to check.
Or you can use the DeepStream docker image, which already has all components installed.
When I checked, my system's GStreamer version is 1.20.3, and when I run the gst-inspect-1.0 x264enc command, here is the result: gst-inspect-x264enc.txt (13.0 KB)
I think it gives this warning when I try to set the “tune” property. Any idea how I can proceed with this? And this is the result of the dpkg -l | grep GStreamer command: result.txt (4.8 KB)
I am extremely sorry, setting this property works. Since I am developing my pipeline in C++, while setting this property I had an extra space in the property-name string, i.e. "tune " instead of "tune", which caused the warning; I only noticed it just now. Now this is working.
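For anyone else running into this, the difference is just the property-name string (a minimal sketch; the variable name is a placeholder):

// Wrong: the trailing space in the property name triggers the GLib
// "no property named ..." warning and the property is never set.
// g_object_set(G_OBJECT(encoder), "tune ", 4, NULL);

// Correct: "tune" is a flags property on x264enc; 4 corresponds to the
// "zerolatency" flag.
g_object_set(G_OBJECT(encoder), "tune", 4, NULL);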
So I have a question: since we set this tune property to zerolatency, do we get a lower-quality encoded stream, and if so, are there other ways to solve this problem without reducing the encoding quality much?
And, if possible, could you explain in detail what was causing this blocking behavior, so that I can understand it better?
x264enc is a GStreamer open-source plugin. “tune=4” means zerolatency; it does not affect encoding quality.
Without “tune=4”, x264enc will not release buffers as soon as possible, and the upstream plugins will wait if buffers are not returned. You can add a probe function on x264enc to verify this.
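For example, a simple buffer-count probe on the x264enc src pad will show whether the encoder keeps pushing buffers out (a sketch; attach it to your encoder element):

#include <gst/gst.h>

// Counts buffers leaving the encoder. If this count stops increasing while
// the rest of the pipeline is still feeding data, the encoder is holding
// on to buffers.
static GstPadProbeReturn
enc_src_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static guint64 count = 0;
  g_print("x264enc output buffer %" G_GUINT64_FORMAT "\n", ++count);
  return GST_PAD_PROBE_OK;
}

static void
add_encoder_probe(GstElement *x264enc)
{
  GstPad *src_pad = gst_element_get_static_pad(x264enc, "src");
  gst_pad_add_probe(src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                    enc_src_buffer_probe, NULL, NULL);
  gst_object_unref(src_pad);
}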