frame_meta.source_frame_width is receiving a value of 0

**• Hardware Platform (Jetson / GPU)** NVIDIA GeForce RTX 3060
**• DeepStream Version** 7.1
**• TensorRT Version** 10.3
**• NVIDIA GPU Driver Version (valid for GPU only)** 560.35.03

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Get the GstBuffer
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Get batch metadata from the buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK

    # Iterate through the frames in the batch
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            image_width = frame_meta.source_frame_width
        except StopIteration:
            break
        # Move to the next frame in the batch
        l_frame = l_frame.next

    return Gst.PadProbeReturn.OK

Here frame_meta.source_frame_width is receiving a value of 0. I have set streammux.set_property('width', 1920) and streammux.set_property('height', 1080) and I am calling this probe function after that element. Please let me know why this is happening.

Is osd_sink_pad_buffer_probe set on the sink pad of nvosd? I suggest getting these values before nvmultistreamtiler, because they are meaningless after nvmultistreamtiler.
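For reference, a minimal sketch of attaching the same probe to the tiler's sink pad. The variable name `tiler` is an assumption; use whatever your nvmultistreamtiler element is called in your pipeline, and note this assumes the usual `from gi.repository import Gst` import.

tiler_sink_pad = tiler.get_static_pad("sink")
if tiler_sink_pad:
    # Run the probe on every buffer before tiling, so
    # frame_meta.source_frame_width/height still hold per-source values.
    tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER,
                             osd_sink_pad_buffer_probe, 0)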

Thank you. I have used the probe function on the sink pad of nvmultistreamtiler and it is working perfectly.

Now I am trying to stitch the multiple streams, since the cameras are adjacent to each other with minor overlap. However, I have faced an issue when sending this stitched stream: the maximum resolution accepted by DeepStream is lower than the resolution of the stitched stream. How can I overcome this and use the multiple streams as a single stitched stream?

  1. Why do you want to stitch the multiple streams? Is there any benefit for your application?
  2. What are the resolution and fps of the multiple streams?
  3. Could you elaborate on "i have faced an issue"? And what do you mean by "the maximum resolution accepted by deepstream is lower than the stitched stream"? Could you share the related doc? If inputting a high-resolution source, we usually use nvdspreprocess to set multiple ROIs and then let nvinfer do inference on the ROIs (see the config sketch below).
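For illustration only, a minimal nvdspreprocess config sketch showing how ROIs can be defined on one high-resolution source. The key names follow the nvdspreprocess config-file format, but every value here (ROI coordinates, processing size, network-input-shape, tensor-name, custom-lib-path) is an assumption and must be matched to your actual model and stream.

[property]
enable=1
target-unique-ids=1
process-on-frame=1
# assumed model input: batches of 4 ROIs, each 3x544x960
network-input-shape=4;3;544;960
processing-width=960
processing-height=544
network-color-format=0
tensor-data-type=0
tensor-name=input_1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
process-on-roi=1
# left;top;width;height per ROI; assumed four side-by-side ROIs on a 3744x1088 frame
roi-params-src-0=0;0;936;1088;936;0;936;1088;1872;0;936;1088;2808;0;936;1088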

This topic is a duplicate of "How to add multi camera tracking with deepstream handling" (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums).

I wanted to stitch the multiple frames for multi-camera tracking, since my cameras are placed nearby with slight overlap.

I have tried stitching and sending the streams as a single input and faced the following error.

Please refer to this topic for the hardware decoding issue. Or could you set the codec of the source to H265?

To narrow down this decoder issue, here are some questions.

  1. After stitching the frames, how did you encode the frame? What kind of tools or plugins did you use? What is the output resolution and codec type?
  2. If using H265 to encode the stream, will the decoder issue remain?
  3. Could you provide a 30-second stream captured with one of the following methods? Thanks!
#for h264
gst-launch-1.0 rtspsrc location=XXX ! rtph264depay ! h264parse ! 'video/x-h264,stream-format=byte-stream' ! filesink location=test.h264
#for h265
gst-launch-1.0 rtspsrc location=XXX ! rtph265depay ! h265parse ! 'video/x-h265,stream-format=byte-stream' ! filesink location=test.h265
#for unknown codec
ffmpeg -i rtsp://10.19.227.166/media/video1 -c copy 1.ts

I have used the uridecodebin element. As you can see in the screengrab I uploaded previously, the resolution is 3744x1088 with H264 codec. The error occurs because the resolution exceeds the resolution supported by DeepStream.

Thanks for sharing! Here are two questions.

  1. Testing on my RTX 3060 with DS7.1, I can't reproduce this hardware decoder issue. In terms of encoding and decoding, is there any difference from your test? Could you use my method to reproduce the issue? Thanks!
gst-launch-1.0 -v videotestsrc num-buffers=150  ! video/x-raw,format=RGBA,width=3744,height=1088 !  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h264enc bitrate=1000000   !  filesink location=test.h264   
gst-launch-1.0   uridecodebin uri=file:///home/test.h264 ! fakesink

encode-decode.txt (6.2 KB)
2. Did you use nvv4l2h264enc to encode the stream? If using H265 to encode, will the hardware decoding issue remain? As you know, H265 is more suitable for high-definition video encoding (an H265 test pipeline is sketched below).
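For reference, an H265 variant of the earlier test pipeline. This is only a sketch: the bitrate, resolution, and file paths are carried over from the H264 example above and may need adjusting for your setup.

gst-launch-1.0 -v videotestsrc num-buffers=150 ! video/x-raw,format=RGBA,width=3744,height=1088 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h265enc bitrate=1000000 ! filesink location=test.h265
gst-launch-1.0 uridecodebin uri=file:///home/test.h265 ! fakesink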

I have tried with your pipeline and I am still receiving the same error. Could you please check with my video?

  1. The codec of the shared video is MPEG-4 Part 2, not H264. Please use H264 or H265 to encode the stream; a sample ffmpeg re-encode command is sketched after the command below.
  2. If the same issue is still reported with H264, could you share the log of the following cmd and of "nvidia-smi"? Thanks!
gst-launch-1.0  -v  uridecodebin uri=file:///home/new.mp4  ! fakesink
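For illustration, one way to re-encode such a file to H264 with ffmpeg. This is a sketch only: the input name is taken from the command above and the output file name is an assumption.

ffmpeg -i new.mp4 -c:v libx264 -c:a copy new_h264.mp4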