Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): JetPack 4.6.3 (R32.7.5)
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only): N/A
• Issue Type (questions, new requirements, bugs): Bugs
**• How to reproduce the issue?**
The issue arises when using a GStreamer pipeline to process live camera feeds with DeepStream components. The pipeline exhibits frame drops when the encoder runs at its default performance settings, and latency issues when encoder performance is limited to mitigate the frame drops.
This behavior is observed with both single-camera and dual-camera streams.
Pipeline Configuration:
Camera Sources:
Two cameras are connected via v4l2src (with device paths /dev/video0 and /dev/video1), capturing video streams from each camera.
Video Processing:
Each video stream goes through a capsfilter to set the desired video format (I420) and resolution (640x720).
The streams are then converted to NVMM memory format using nvvideoconvert for hardware-accelerated processing.
Stream Muxing:
The two streams are fed into nvstreammux, which batches them together for simultaneous processing. Dynamic pads (sink_0 and sink_1) are used to link the two camera streams to nvstreammux.
Inference (Object Detection):
The muxed stream is processed by nvinfer for object detection.
Post-Processing:
After inference, the processed streams are converted and displayed using nvvidconv and nvosd for additional formatting and overlay (e.g., bounding boxes).
The processed video streams are then tiled together using nvmultistreamtiler.
Encoding and Streaming:
The tiled video streams are encoded using nvv4l2h264enc and then parsed and multiplexed by h264parse and flvmux.
The final output is streamed to an RTMP server using rtmpsink.
Rendering:
The processed video is displayed locally via autovideosink.
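For reference, the stages described above correspond roughly to the following gst-launch-style sketch of the streaming pipeline (the actual pipeline is built in C; the inference config path, RTMP URL, and the nvvideoconvert/nvdsosd element names are assumptions on my part, and the element order follows the description above):

```
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,format=I420,width=640,height=720 ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_0 \
  v4l2src device=/dev/video1 ! video/x-raw,format=I420,width=640,height=720 ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_1 \
  nvstreammux name=mux live-source=1 batch-size=2 width=1280 height=720 \
    batched-push-timeout=400000 ! \
  nvinfer config-file-path=<infer-config>.txt ! \
  nvvideoconvert ! nvdsosd ! nvmultistreamtiler ! \
  nvv4l2h264enc ! h264parse ! flvmux ! rtmpsink location=rtmp://<server>/<stream-key>
```

The local display path uses autovideosink in place of the encode/parse/mux/rtmpsink chain.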
Issue Encountered:
While running this pipeline, frame drops were observed during streaming with both single-camera and dual-camera streams. The issue likely stems from the performance of the pipeline, either during capture, inference, or streaming, causing delays or frame loss.
Attempt to Resolve the Issue:
To address the frame drop problem, I attempted to limit the encoding performance in order to reduce the strain on the parser and mux elements, ensuring they could keep up with the video streams.
Changes Made:
Encoder Settings:
Adjusted the encoder bitrate to 3 Mbps (3000000), which is a moderate bitrate to balance between quality and performance.
Set preset-level to 2 to modify encoding speed, attempting to lower the encoding demand.
Disabled maxperf-enable to reduce the encoding workload.
Set input-buffers to 4 and num-output-buffers to 16 to manage the buffer allocation more effectively.
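In gst-launch terms, the adjusted encoder stage looks roughly like this (a sketch only; the exact buffer-count property names for nvv4l2h264enc vary between releases, so verify them with gst-inspect-1.0 nvv4l2h264enc before relying on this):

```
... ! nvv4l2h264enc bitrate=3000000 preset-level=2 maxperf-enable=false \
      num-output-buffers=16 ! h264parse ! flvmux ! rtmpsink location=rtmp://<server>/<stream-key>
```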
Effect of Changes:
These adjustments did not fully resolve the frame drop issue and instead led to significant frame loss, with a delay of approximately 1 to 2 seconds.
This suggests that while limiting the encoding performance reduced the load on some elements, it did not provide a sufficient solution to prevent frame drops in the overall pipeline.
After testing, I observed that the issue occurs somewhere after the nvosd element, likely during encoding, parsing, muxing, or sinking. When I bypassed the encoding and parsing steps and linked nvosd directly to the sink, the pipeline worked properly without any frame drops or delays. This indicates that the problem is related to the processing steps that follow nvosd.
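The bypass test described above amounts to terminating the pipeline immediately after the overlay, along these lines (a sketch; an nvvideoconvert before autovideosink is my assumption, since the sink typically needs the stream out of NVMM memory):

```
... ! nvdsosd ! nvvideoconvert ! autovideosink
```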
1. What did you do to achieve maximum performance in your pipeline?
Yes, I referred to the document, but it mainly discusses system configuration. While useful, it focuses mostly on configuring text files, whereas my pipeline is implemented directly in C with a mix of DeepStream and GStreamer elements, bypassing text-file configuration.
2. What did you do to make the pipeline save the FLV files while displaying the video via ‘autovideosink’? Did you use a “tee”?
No, I am not using a “tee.” Instead, I have two separate pipelines: one with flvmux and rtmpsink for saving the FLV files, and another with autovideosink purely for testing purposes.
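For comparison, a single pipeline that both streams and displays would use a tee with a queue on each branch, along these lines (a sketch; the queue elements decouple the branches so a slow RTMP branch does not stall the local display):

```
... ! nvdsosd ! nvvideoconvert ! tee name=t \
  t. ! queue ! nvv4l2h264enc ! h264parse ! flvmux ! rtmpsink location=rtmp://<server>/<stream-key> \
  t. ! queue ! autovideosink
```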
3. What is your v4l2 camera’s original framerate?
The camera has a maximum framerate of 30 FPS.
4. What are the nvstreammux properties and settings?
The properties and settings for nvstreammux are as follows:
live-source: 1
batch-size: 2
width: 1280
height: 720
batched-push-timeout: 400000
5. What are the nvmultistreamtiler’s “width” and “height” properties?
The properties for nvmultistreamtiler are as follows:
I have tried altering most of the parameters, but the issue persists; even the default configuration yields the same problem. I am linking nvv4l2h264enc to the parser (h264parse) and then to the muxer (flvmux). Are you certain this setup won’t cause issues, such as the parser or muxer struggling to keep up with nvv4l2h264enc? I don’t believe the issue lies on the streammux side, because when I use the same streammux for a display stream through nvdsosd to autovideosink, it streams without any issues. Based on this, I assume the problem occurs somewhere after the encoding step.
The log output is as follows:
```
0:00:08.128778585 7888 0x558d2a1a30 WARN v4l2bufferpool gstv4l2bufferpool.c:1087:gst_v4l2_buffer_pool_start:encoder:pool:src Uncertain or not enough buffers, enabling copy threshold
0:00:08.137089186 7888 0x558d2a1a30 WARN v4l2bufferpool gstv4l2bufferpool.c:790:gst_v4l2_buffer_pool_start:source3:pool:src Uncertain or not enough buffers, enabling copy threshold
0:00:08.172774211 7888 0x558d2a1a80 WARN v4l2bufferpool gstv4l2bufferpool.c:790:gst_v4l2_buffer_pool_start:source2:pool:src Uncertain or not enough buffers, enabling copy threshold
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE
0:00:08.818078458 7888 0x7f2c009c00 WARN v4l2bufferpool gstv4l2bufferpool.c:1536:gst_v4l2_buffer_pool_dqbuf:encoder:pool:src Driver should never set v4l2_buffer.field to ANY
0:00:08.819237032 7888 0x558d2a18a0 FIXME basesink gstbasesink.c:3145:gst_base_sink_default_event: stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:09.591056307 7888 0x558d2a18a0 WARN flvmux gstflvmux.c:1082:gst_flv_mux_buffer_to_tag_internal:flvmux:sink_0 Got backwards dts! (0:00:00.566000000 < 0:00:00.666000000)
0:00:18.994649595 7888 0x558d2a1a30 WARN v4l2src gstv4l2src.c:976:gst_v4l2src_create: lost frames detected: count = 1 - ts: 0:00:10.826049966
0:00:19.630274865 7888 0x558d2a1a80 WARN v4l2src gstv4l2src.c:976:gst_v4l2src_create: lost frames detected: count = 2 - ts: 0:00:11.461630549
0:00:19.694673117 7888 0x558d2a1a30 WARN v4l2src gstv4l2src.c:976:gst_v4l2src_create: lost frames detected: count = 2 - ts: 0:00:11.526068497
```