'streaming stopped, reason not-negotiated (-4)' with h264parse when using tee element

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.3
• TensorRT Version: 8.2.1.32
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description): GLib, Gst

Hello, I’m trying to create a pipeline that runs an object detector, but I want to be able to deactivate the detector on demand while the video keeps streaming without it.
My idea was to use a tee element to duplicate the stream, link each branch to a valve element that I activate and deactivate through an Observer-pattern class accessed from another thread, and send both branches via udpsink to the same port, so I can see the result in the same window.
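To make the toggling idea concrete, here is a minimal sketch of such an Observer-pattern switch in plain Python. The class and method names are hypothetical, and the valves are stand-ins for the real Gst valve elements (which expose the same `set_property` call):

```python
class ValveSwitch:
    """Toggle two complementary tee branches via their valves' 'drop'
    property and notify subscribed observers of each change.

    valve_detector / valve_passthrough are stand-ins for GStreamer
    valve elements: any object with set_property(name, value) works.
    """

    def __init__(self, valve_detector, valve_passthrough):
        self._detector = valve_detector
        self._passthrough = valve_passthrough
        self._observers = []

    def subscribe(self, callback):
        # callback(enabled) runs after every toggle.
        self._observers.append(callback)

    def set_detector_enabled(self, enabled):
        # Exactly one branch flows at a time: drop the passthrough
        # branch while the detector runs, and vice versa.
        self._detector.set_property("drop", not enabled)
        self._passthrough.set_property("drop", enabled)
        for callback in self._observers:
            callback(enabled)
```

In the real pipeline the two valves would be the `valve_1`/`valve_2` elements, and the thread calling `set_detector_enabled` is the one running the observer logic.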

I’m creating the pipeline using Python, but the pipeline structure should be something like this.

gst-launch-1.0 -e \
filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! nvstreammux ! tee name=t \
    t. ! valve name=valve_1 drop=0 ! nvinfer ! nvvideoconvert ! nvdsosd ! 'video/x-raw(memory:NVMM), format=I420' ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000 \
    t. ! valve name=valve_2 drop=1 ! 'video/x-raw(memory:NVMM), format=I420' ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000

However, I keep getting the same error no matter what I try:

Error: gst-stream-error-quark: Internal data stream error. (1): gstbaseparse.c(3611): gst_base_parse_loop (): /GstPipeline:pipeline0/GstH264Parse:h264-parser:

And I don’t know how to address it. My code is based on deepstream_test1_rtsp_out.py (I’m working via SSH and remote desktop), and that sample works fine for me; I didn’t modify any part related to the h264parse element in my code.

Running it with debug level 3, I get a lot of messages like:

 v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x2d30bc0 Failed to determine interlace mode
v4l2 gstv4l2object.c:4476:gst_v4l2_object_probe_caps:<encoder_siam:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1

and this one marked in red:

GST_PADS gstpad.c:4226:gst_pad_peer_query:<nvv4l2-decoder:src> could not send sticky events

The part I added to the code is quite simple: I just create a tee element and assign each src pad to a valve. One branch is connected to the same flow as deepstream_test1_rtsp_out.py and the other is connected to a similar flow, but without the nvinfer and nvdsosd elements.

Aside from creating the new elements and setting their properties, the only code I added to modify the pipeline is the linking process:

source.link(h264parser)
h264parser.link(decoder)

sinkpad = streammux.get_request_pad("sink_0")
if not sinkpad:
    sys.stderr.write(" Unable to get the sink pad of streammux \n")

srcpad = decoder.get_static_pad("src")
if not srcpad:
    sys.stderr.write(" Unable to get source pad of decoder \n")
srcpad.link(sinkpad)

streammux.link(tee)

# Branch 1: object detector
tee_object_detector = tee.get_request_pad("src_0")
if not tee_object_detector:
    sys.stderr.write("Unable to get tee_object request pad\n")
sink_pad = self.valve_object_detector.get_static_pad("sink")
tee_object_detector.link(sink_pad)

# Branch 2: passthrough, no inference
tee_deact = tee.get_request_pad("src_1")
if not tee_deact:
    sys.stderr.write("Unable to get tee_siam request pad\n")
sink_pad = self.valve_deactivate.get_static_pad("sink")
tee_deact.link(sink_pad)

self.valve_object_detector.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)

self.valve_deactivate.link(caps_deactivate)
caps_deactivate.link(encoder_deactivate)
encoder_deactivate.link(rtppay_deactivate)
rtppay_deactivate.link(sink_deactivate)
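One thing that can make this kind of bug easier to spot: `Gst.Element.link()` returns a boolean that the linking code above never checks, so a failed link only surfaces later as the not-negotiated stream error. A small checked-link helper is one possible remedy (a sketch; `link_checked` is a hypothetical name, and note that `Gst.Pad.link()` returns `Gst.PadLinkReturn.OK`, which equals 0 and is falsy, so pad links would need an explicit comparison instead):

```python
def link_checked(upstream, downstream):
    """Link two elements and raise immediately on failure, instead of
    letting the pipeline fail later with a vague stream error.

    Expects objects whose link() returns a truthy value on success,
    as Gst.Element.link() does. Returns downstream so calls chain.
    """
    if not upstream.link(downstream):
        raise RuntimeError(
            "failed to link %r -> %r" % (upstream, downstream))
    return downstream
```

Then `link_checked(streammux, tee)` and friends would pinpoint exactly which link in the two branches refuses to negotiate.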

Can anyone assist me with this issue?

PS:
I also get these errors related to the RTSP server. They are not my main concern at the moment and I haven’t checked yet why they happen, but if anyone knows a solution, I would appreciate it.

rtspserver rtsp-server.c:929:gst_rtsp_server_create_socket:<GstRTSPServer@0x3241f70> failed to create socket
 rtspserver rtsp-server.c:1342:gst_rtsp_server_attach:<GstRTSPServer@0x3241f70> failed to create watch: Error binding to address: Address already in use

Is the h264parse plugin working properly in the demo deepstream_test1_rtsp_out.py without your changes?

Yes. It works fine. No problem either with the nvinfer or nvosd element.

I have checked the debug output from deepstream_test1_rtsp_out.py and I get the same warnings and errors, even for the RTSP server.
The only difference with my code is at the end.
In my code I get

v4l2 gstv4l2object.c:4476:gst_v4l2_object_probe_caps:<nvv4l2-decoder:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x272d3230 Failed to determine interlace mode
v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x272d3230 Failed to determine interlace mode
v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x272d3230 Failed to determine interlace mode
GST_PADS gstpad.c:4226:gst_pad_peer_query:<nvv4l2-decoder:src> could not send sticky events
v4l2videodec gstv4l2videodec.c:1755:gst_v4l2_video_dec_decide_allocation:<nvv4l2-decoder> Duration invalid, not setting latency
v4l2bufferpool gstv4l2bufferpool.c:1087:gst_v4l2_buffer_pool_start:<nvv4l2-decoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
v4l2bufferpool gstv4l2bufferpool.c:1536:gst_v4l2_buffer_pool_dqbuf:<nvv4l2-decoder:pool:src> Driver should never set v4l2_buffer.field to ANY
baseparse gstbaseparse.c:3611:gst_base_parse_loop:<h264-parser> error: Internal data stream error.
baseparse gstbaseparse.c:3611:gst_base_parse_loop:<h264-parser> error: streaming stopped, reason not-negotiated (-4)

and in deepstream_test1_rtsp_out.py I get:

v4l2 gstv4l2object.c:4476:gst_v4l2_object_probe_caps:<nvv4l2-decoder:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x3d788790 Failed to determine interlace mode
v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x3d788790 Failed to determine interlace mode
v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x3d788790 Failed to determine interlace mode
v4l2bufferpool gstv4l2bufferpool.c:1087:gst_v4l2_buffer_pool_start:<encoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
v4l2videodec gstv4l2videodec.c:1755:gst_v4l2_video_dec_decide_allocation:<nvv4l2-decoder> Duration invalid, not setting latency
v4l2bufferpool gstv4l2bufferpool.c:1087:gst_v4l2_buffer_pool_start:<nvv4l2-decoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
v4l2bufferpool gstv4l2bufferpool.c:1536:gst_v4l2_buffer_pool_dqbuf:<nvv4l2-decoder:pool:src> Driver should never set v4l2_buffer.field to ANY

v4l2bufferpool gstv4l2bufferpool.c:1536:gst_v4l2_buffer_pool_dqbuf:<encoder:pool:src> Driver should never set v4l2_buffer.field to ANY

The only difference is the line I mentioned previously, the one marked in red:

GST_PADS gstpad.c:4226:gst_pad_peer_query:<nvv4l2-decoder:src> could not send sticky events

and two lines about h264parse element error

baseparse gstbaseparse.c:3611:gst_base_parse_loop:<h264-parser> error: Internal data stream error.
baseparse gstbaseparse.c:3611:gst_base_parse_loop:<h264-parser> error: streaming stopped, reason not-negotiated (-4)

OK. Could you attach the whole Python code after you modified the demo code? I can run it in my environment and check that.

Deactivate_SiamTracker_ObjectDetector_Pipeline.py (17.7 KB)
yolov7.cfg (10.7 KB)
config_infer_primary_yoloV7.txt (845 Bytes)
yolov7.wts
model_b4_gpu0_fp16.engine (98.5 MB)

I have also uploaded the files I used for the object detector; I just used the YOLOv7 pretrained zoo model. yolov7.wts is too heavy, so I uploaded it to Google Drive.

The h264 file I used is the Nvidia sample located in /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264

I have the same problem during debugging on my end. Could you first use the basic GStreamer plugins to run the tee + valve scenario? We have not used the valve plugin before and are not sure if it is effective in this way.
We have a demo for your needs but it’s a c/c++ demo. You can refer to that too.
https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app

I’m not sure what you mean by the basic plugin of GStreamer.
I tried using nvdec_h264 instead of nvv4l2decoder, but I get the same error.
I also tried adding just a valve element to stop and release the flow. It kind of works: I don’t get any h264parse error, but once I close the valve by setting its drop property to TRUE, the flow can’t resume when I reopen it by setting drop back to FALSE.
Gstreamer_Deactivate_SiamTracker_ObjectDetector_Pipeline.py (17.5 KB)
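A side note on the threading part: since the drop property is flipped from another thread, it is often recommended to marshal such changes onto the main-loop thread (with GLib that would be `GLib.idle_add`) rather than calling `set_property` directly across threads. Here is a pure-Python sketch of that marshaling pattern, with the valve as a stand-in object and hypothetical names:

```python
import queue

class MainLoopMarshal:
    """Queue property changes from worker threads and apply them only
    from the thread that services the queue, mimicking how
    GLib.idle_add hands work over to the GStreamer main loop.

    element is a stand-in for a Gst element with set_property().
    """

    def __init__(self, element):
        self._element = element
        self._pending = queue.Queue()  # thread-safe FIFO

    def request(self, prop, value):
        # Safe to call from any thread, e.g. the observer callback.
        self._pending.put((prop, value))

    def service(self):
        # Run from the main-loop thread; applies changes in order.
        while True:
            try:
                prop, value = self._pending.get_nowait()
            except queue.Empty:
                break
            self._element.set_property(prop, value)
```

This does not by itself fix the stream not resuming, but it rules out cross-thread property access as a variable while debugging.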

I also tried the same, but with a tee element with a single src pad, and I get the same result as with one valve element.

I mean you can first try to use the original GStreamer plugins, without the NVIDIA plugins. For example, a simple pipeline: src -> h264parse -> avdec_h264 -> tee -> valve -> … Once that pipeline runs properly, you can add the NVIDIA plugins back in. Then we can better locate the problem.
This issue may simply be related to the usage of the valve plugin.

While I was stuck on this, I tried a different approach using dynamic pipelines. I got stuck there too, but the result seems more stable. I made another post in case it might help someone.

I still want to make this work; the valve element seems more suitable for what I’m trying to achieve. I found another post with a similar problem where they solved it by adding the parser after the tee element.

I will post any progress I make.

I tried what that post said and it seems to mostly work.
Changing the pipeline to something like this makes the previous error disappear:

gst-launch-1.0 -e \
filesrc location=sample_720p.h264 ! tee name=t \
    t. ! h264parse ! nvv4l2decoder ! nvstreammux ! valve name=valve_1 drop=0 ! nvinfer ! nvvideoconvert ! nvdsosd ! 'video/x-raw(memory:NVMM), format=I420' ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000 \
    t. ! h264parse ! nvv4l2decoder ! nvstreammux ! valve name=valve_2 drop=1 ! 'video/x-raw(memory:NVMM), format=I420' ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000

However, I get the same problem I mentioned earlier: once the valve is closed, the data stream won’t flow again even if I reopen it.

Could you try setting the drop property of both valve plugins to 0 at first? You can start tuning with a simple pipeline like the one below:

gst-launch-1.0 -e
filesrc location=sample_720p.h264 ! tee name=t \
              t. ! h264parse ! nvv4l2decoder ! nvstreammux ! valve name=valve_1 drop=0 ! nvinfer ! nvvideoconvert ! nvdsosd ! nv3dsink sync=1 \
              t. ! h264parse ! nvv4l2decoder ! nvvideoconvert ! valve name=valve_2 drop=0 ! nv3dsink sync=1

Now that we solved the problem with the decoder, I tried a pipeline with just one valve.

filesrc location=sample_720p.h264 ! tee name=t \
    t. ! h264parse ! nvv4l2decoder ! nvstreammux ! valve name=valve drop=0 ! nvinfer ! nvvideoconvert ! nvdsosd ! 'video/x-raw(memory:NVMM), format=I420' ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000

But I get the same problem as before with the valve element: once I close it and open it again, the stream doesn’t resume. I have read many posts suggesting to set the async property of the sink element to False, but that doesn’t work for me. It may be because of udpsink. I run the program on a headless server and send the processed data to a remote desktop with X11 to display it with xvimagesink.

gst-launch-1.0 -v udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! avdec_h264 ! xvimagesink sync=0

I tested it, and while the valve is open it doesn’t show any frame; however, when I close the valve it shows one frame, probably the last one it processed, but without any boxes from the inference.
When I can, I will try the same pipeline with a filesink element.

Glad to hear that. Could you attach your code with the decoder problem resolved? We can help check it too. Or you can first use a simple pipeline like filesrc -> h264parse -> valve -> rtph264pay -> sink to tune the valve plugin.

Sure.
Deactivate_SiamTracker_ObjectDetector_Pipeline.py (16.6 KB)
I’m not sure if the code will work as-is, because I have been testing the valve element, but at least the structure doesn’t throw the same errors.

The problem seems to be the combination of h264parse + tee. According to this post (GStreamer branch manipulation - Stack Overflow):

Regarding the link error, for some reason when linking tee → branch, the Caps of the tee source (found using “gst_element_get_compatible_pad()”) were not the same as the source of the preceding element (the parser). As a result, when the branch sink wanted caps and queried the tee, which queried the element behind, the branch sink got the same caps as the parser source, which was incompatible with the tee source. This was fixed (in my case) by putting the parser AFTER the tee (on the branch) rather than before it. I do not know the cause of this, but it works now.

So, you could fix it by putting a capsfilter between the h264parse element and the tee element or, as the post suggests, by placing h264parse after the tee element. I did the latter.
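As a sketch of that fix in Python, here is the branch layout assembled as a `Gst.parse_launch`-style description string, with `h264parse` on each branch after the tee. This is plain string assembly, so it runs without GStreamer; the element names simply mirror the pipelines above:

```python
def branch(tail):
    # Each tee branch carries its own parser + decoder, so caps are
    # negotiated per branch instead of across the tee boundary.
    return "t. ! h264parse ! nvv4l2decoder ! " + tail

# Common encode/stream-out tail shared by both branches.
ENCODE_TAIL = ("nvv4l2h264enc ! rtph264pay ! "
               "udpsink host=127.0.0.1 port=5000")

PIPELINE = " ".join([
    "filesrc location=sample_720p.h264 ! tee name=t",
    branch("nvstreammux ! valve name=valve_1 drop=0 ! nvinfer ! "
           "nvvideoconvert ! nvdsosd ! "
           "'video/x-raw(memory:NVMM), format=I420' ! " + ENCODE_TAIL),
    branch("nvstreammux ! valve name=valve_2 drop=1 ! "
           "'video/x-raw(memory:NVMM), format=I420' ! " + ENCODE_TAIL),
])
```

The key property is simply the ordering: the tee comes first, and every branch begins with its own h264parse.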

About the topic of my other post, the same approach but using dynamic pads:

I found out what was happening with the source element.
I had neither added it to the pipeline nor linked it to the tee element.

So, I remade the code following this GitHub sample and now the source element is in the ACTIVE state, but I have the same problem with the valve element: the data doesn’t flow through the pipeline.
The bin containers seem to be attached correctly, but no matter what I try I don’t get any video, and so the probe functions never fire.

If anyone can guide me, I would really appreciate it. If you need more info, just ask.
Dinamic.py (15.3 KB)

OK. This topic was just for the h264parse problem, and I’m glad to know it’s working well now. Let’s discuss the valve problem in the other topic.
