Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): dGPU, NVIDIA RTX 5000
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: TensorRT 8.6.1.6-1, CUDA 12.2
• NVIDIA GPU Driver Version (valid for GPU only): 535.183.01
• Issue Type (questions, new requirements, bugs): Questions
I have created a parallel inferencing pipeline using the NVIDIA example here as a reference. The source for this pipeline is an MJPEG stream received from an IP camera. The pipeline works as expected when I use the souphttpsrc element that uridecodebin selects with its default configuration (only the URI set), but this causes the first 4-5 seconds of frames to be buffered, and I only want the pipeline to process the latest frame. The souphttpsrc element generated by uridecodebin has an is-live property that solves my frame-buffering issue when used with a single inferencing pipeline, but setting this property on the parallel inferencing pipeline causes the pipeline to become stuck, and only one frame seems to be processed. This shows up on the nveglglessink as a frozen output frame. Do you have any suggestions as to why this is happening in the pipeline below? If this could be a souphttpsrc issue, do you have any suggestions on how best to ingest an MJPEG stream with little to no buffering (live)?
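For reference, this is roughly how I apply is-live via uridecodebin's source-setup signal in the single-inference case (a minimal Python sketch; the URI and the downstream elements are placeholders, not my actual pipeline):

```python
# Minimal sketch (not the full parallel pipeline): set "is-live" on the
# souphttpsrc that uridecodebin creates. URI and downstream are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def on_source_setup(uridecodebin, source):
    # "source" is the element uridecodebin selected for the URI scheme;
    # for an http:// URI that is souphttpsrc, which exposes "is-live".
    if source.find_property("is-live") is not None:
        source.set_property("is-live", True)

pipeline = Gst.parse_launch("uridecodebin name=src ! fakesink sync=false")
src = pipeline.get_by_name("src")
src.set_property("uri", "http://<camera-ip>/mjpeg")  # placeholder URI
src.connect("source-setup", on_source_setup)
pipeline.set_state(Gst.State.PLAYING)
# In the real application a GLib.MainLoop runs here.
```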
Working Pipeline (but this buffers the stream for 4-5s, catches up to live and then runs live)
From your pipeline graph, the same "unique-id" value is set on all four gst-nvinfer elements; you should assign a different "unique-id" to each nvinfer.
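For example, a minimal Python sketch (the config-file paths below are placeholders) of assigning a distinct unique-id per nvinfer; the same can also be done with the gie-unique-id key inside each nvinfer config file:

```python
# Sketch: give each of the four nvinfer elements its own "unique-id".
# Config file names are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

infer_configs = [
    "model_a_config.txt",  # placeholder config files
    "model_b_config.txt",
    "model_c_config.txt",
    "model_d_config.txt",
]

infer_elements = []
for idx, cfg in enumerate(infer_configs, start=1):
    nvinfer = Gst.ElementFactory.make("nvinfer", f"infer-{idx}")
    nvinfer.set_property("config-file-path", cfg)
    nvinfer.set_property("unique-id", idx)  # distinct id per inference branch
    infer_elements.append(nvinfer)
```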
According to your description, the pipeline takes 4-5 seconds to buffer frames before playback. HTTP is not a real-time transfer protocol; why did you choose HTTP streaming to meet your real-time requirement?
souphttpsrc is an open-source GStreamer plugin; you may debug and tune it yourself.
@Fiona.Chen Good catch on the unique-id. That was just a copy-paste error.
I agree HTTP is not a real-time protocol, but the requirement here is to use an MJPEG stream. Do you have any other suggestions on how to ingest an MJPEG stream?
I have a single-inference version of the pipeline shown above, and enabling "is-live" prevents the 4-5 second buffering. It really is buffering within the pipeline rather than latency: my pipeline runs at double the camera's speed (60 fps vs. the camera's 30 fps) for those 4-5 seconds while it catches up, and once they elapse the stream is shown in real time on the pipeline's eglglessink. The only difference between the working pipeline above and the non-working one is the "is-live" parameter, which adds a queue directly after the source element. I am now wondering if there is something outside of souphttpsrc that requires a queue. Is there any requirement on the tee element that would require a queue to properly source frames to all four inferencing paths? A sketch of what I mean is below.
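For clarity, this is roughly what I mean by a queue on each tee branch (a minimal Python sketch; the branch contents are placeholders, not my actual inference paths, and the leaky/max-size-buffers settings reflect my goal of processing only the latest frame):

```python
# Sketch: tee with a dedicated queue per branch so a slow or blocked branch
# does not stall the single upstream streaming thread. The upstream source
# feeding the tee and the real inference branches are omitted.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-infer")

tee = Gst.ElementFactory.make("tee", "tee")
pipeline.add(tee)

for idx in range(4):
    queue = Gst.ElementFactory.make("queue", f"queue-{idx}")
    # Keep only the newest buffer so a stalled branch drops instead of blocking.
    queue.set_property("leaky", 2)             # 2 = downstream (drop old buffers)
    queue.set_property("max-size-buffers", 1)
    queue.set_property("max-size-time", 0)     # disable time/byte limits so the
    queue.set_property("max-size-bytes", 0)    # buffer count is the only limit
    sink = Gst.ElementFactory.make("fakesink", f"branch-sink-{idx}")  # placeholder branch
    pipeline.add(queue)
    pipeline.add(sink)
    tee.link(queue)   # requests a new tee src pad and links it
    queue.link(sink)
```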
I don’t think so. The multiqueue only makes the upstream and downstream elements work in different threads. If the HTTP stream needs buffering, the queue will not change anything.