• Hardware Platform (Jetson / GPU): RTX 3070
• DeepStream Version: 5.1 (nvcr.io/nvidia/deepstream:5.1-21.02-triton)
• TensorRT Version: 7.2.1-1+cuda11.1
• NVIDIA GPU Driver Version (valid for GPU only): 470.57.02
• Issue Type (questions, new requirements, bugs): bugs
I’m having an issue with the nvstreamdemux plugin. It takes in a buffer (batch size 1) with an NvDsBatchMeta struct whose user metadata list contains entries of type NVDSINFER_TENSOR_OUTPUT_META (raw results from the nvinferserver plugin), and it outputs a buffer with an NvDsMeta struct of type NVDS_BATCH_GST_META which, after conversion back into an NvDsBatchMeta struct, does NOT contain the inference results in the user metadata list. I also tried passing my own custom metadata through nvstreamdemux along with the inference-results meta; in that case both are present on the plugin’s sink pad, but only my custom metadata is present on the src pad (I attached probes to check this). Does anyone know what could cause this kind of behavior?
Illustration of the problem:
--- gstbuffer (infer meta + custom meta) ---> | nvstreamdemux | --- gstbuffer (custom meta) --->
Does the deepstream-infer-tensor-meta-test sample work on your platform?
Please investigate the deepstream-infer-tensor-meta-test code carefully.
There is no batch on the nvstreammux sink pad; nvstreammux generates the original batch_meta. Please don’t add or change any batch meta before nvstreammux. See: Gst-nvstreammux — DeepStream 5.1 Release documentation
Can you show the complete pipeline? The easiest way is to dump the GStreamer pipeline graph. See: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
Hi, I’m talking about nvstreamdemux, not nvstreammux. I’m using the demuxer because I was getting out-of-order frames at the output of the nvinferserver plugin and was told that I needed the nvstreamdemux plugin in order to get the frames back in the correct order.
My pipeline currently looks something like this:
rtspsrc -> rtph264depay -> h264parse -> nvv4l2decoder -> nvvideoconvert -> nvstreammux -> nvinferserver -> nvstreamdemux -> nvvideoconvert -> myplugin
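For concreteness, here is a gst-launch-1.0 sketch of that pipeline. It is a sketch only: the RTSP URI, the mux dimensions, and the nvinferserver config path are placeholders I’ve made up, and myplugin stands in for the custom element.

```shell
# Sketch only: URI, width/height, and config-file path are placeholders.
gst-launch-1.0 \
  rtspsrc location="rtsp://<camera-uri>" ! rtph264depay ! h264parse ! \
  nvv4l2decoder ! nvvideoconvert ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinferserver config-file-path=config_infer.txt ! \
  nvstreamdemux name=demux \
  demux.src_0 ! nvvideoconvert ! myplugin
```

Note that nvstreammux and nvstreamdemux use request pads, which is why the sketch links them by name (mux.sink_0, demux.src_0) rather than with a plain `!`.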
I went through the deepstream-infer-tensor-meta-test sample, but it did not help me much with this problem since it doesn’t contain the demuxer plugin.
nvstreamdemux deletes the batch meta, so there is no batch meta available after nvstreamdemux.
As for NVDSINFER_TENSOR_OUTPUT_META, it is designed for extracting the tensor output with nvinfer, so it is only available on the nvinfer src pad. Downstream of that you cannot find it.
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.