Broken image returned by get_nvds_buf_surface

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
AGX
• DeepStream Version
6.0.1
• JetPack Version (valid for Jetson only)
L4T 32.7.1
• TensorRT Version
8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
Working under 5.0, but corrupt under 6.0.1.
I can see correct images in the EGL sink, but the image returned by get_nvds_buf_surface is corrupt.
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or for which sample application, and the function description.)

    import numpy as np
    import cv2
    import pyds

    # map the NvBufSurface for this batch slot into a NumPy array (RGBA)
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame_copy = np.array(n_frame, copy=True, order='C')  # detach a copy from the mapped buffer
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
    cv2.imwrite('a.jpg', frame_copy)

Can you share your pipeline? Can you help to confirm the video format of the buffer?

Hi, thanks for your response.
This is my pipeline; everything runs well in DeepStream 5.0.
Under 6.0.1 I can still see the camera in the OSD, but the image saved via get_nvds_buf_surface in Python is corrupt.

v4l2src(source=0,width=1600,height=1200,framerate=30) -> capsfilter(image/jpeg) -> videorate(max-rate=3) -> jpegparse -> jpegdec -> nvvideoconvert(src-crop=200:0:1200:1200) -> capsfilter(caps=video/x-raw(memory:NVMM),width=1200,height=1200) -> nvstreammux
v4l2src(source=1,width=1600,height=1200,framerate=30) -> capsfilter(image/jpeg) -> videorate(max-rate=3) -> jpegparse -> jpegdec -> nvvideoconvert(src-crop=200:0:1200:1200) -> capsfilter(caps=video/x-raw(memory:NVMM),width=1200,height=1200) -> nvstreammux
nvstreammux -> nvinfer -> nvvideoconvert -> capsfilter(caps=video/x-raw(memory:NVMM), format=RGBA) -> nvmultistreamtiler -> nvdsosd -> nvegltransform -> nveglglessink
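The pipeline above could be built in Python with `Gst.parse_launch` roughly as follows. This is a sketch based on the description: the device paths (`/dev/video0`, `/dev/video1`), the `mux.sink_N` pad names, and the `config_infer.txt` file name are assumptions, not taken from the original post.

```python
# Sketch: assemble the described pipeline as a gst-launch string.
# Hypothetical details: device paths, nvinfer config file name, pad names.
SOURCE_BRANCH = (
    "v4l2src device=/dev/video{dev} ! image/jpeg,width=1600,height=1200,framerate=30/1 ! "
    "videorate max-rate=3 ! jpegparse ! jpegdec ! "
    "nvvideoconvert src-crop=200:0:1200:1200 ! "
    "video/x-raw(memory:NVMM),width=1200,height=1200 ! mux.sink_{dev}"
)

def build_launch_string():
    # two identical camera branches feeding the muxer's request pads
    sinks = " ".join(SOURCE_BRANCH.format(dev=d) for d in (0, 1))
    return (
        "nvstreammux name=mux batch-size=2 width=1200 height=1200 ! "
        "nvinfer config-file-path=config_infer.txt ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
        "nvmultistreamtiler ! nvdsosd ! nvegltransform ! nveglglessink "
        + sinks
    )

# To actually run it (requires GStreamer + DeepStream installed):
# import gi; gi.require_version("Gst", "1.0"); from gi.repository import Gst
# Gst.init(None)
# pipeline = Gst.parse_launch(build_launch_string())
```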

How to confirm the video format of the buffer?
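One way to confirm the buffer's format is to read the negotiated caps on the pad where the probe is attached. Below is a sketch: `Gst.Pad.get_current_caps()` is the standard GStreamer API, while the `summarize_caps` helper is only an illustrative parser for the caps string it returns.

```python
# Sketch: inspect the negotiated caps at the probe's pad.
def summarize_caps(caps_str):
    """Parse a caps string such as
    'video/x-raw(memory:NVMM), format=(string)RGBA, width=(int)1200'
    into (media_type, {field: value}) for quick inspection."""
    head, _, rest = caps_str.partition(",")
    fields = {}
    for part in rest.split(","):
        if "=" not in part:
            continue
        key, _, value = part.partition("=")
        value = value.strip()
        # strip the optional '(type)' prefix GStreamer prints, e.g. '(string)RGBA'
        if value.startswith("("):
            value = value.split(")", 1)[1]
        fields[key.strip()] = value
    return head.strip(), fields

# Inside the pad probe it would be used as:
# caps = pad.get_current_caps()
# print(summarize_caps(caps.to_string()))
```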

I tried to edit deepstream-imagedata-multistream, replacing the URI source with v4l2, and I get a Segmentation fault if I set

	streammux.set_property('width', 1200)
	streammux.set_property('height', 1200)

Do you use a probe to get the GstBuffer? Where did you add the probe?

Yes, I add it on the capsfilter after nvinfer.

I found the trigger for the Segmentation fault: when I set batch-size to 2 or above, it crashes.
Even the sample v4l2 USB camera Python app crashes.

streammux.set_property('batch-size', 2)

Please fix it, or we will downgrade to DeepStream 5.0, even though it is no longer supported.

I finally got a correct image by calling n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id) twice.
But WHY does it need to be called twice???

Any update?

It isn’t reasonable to need to call it twice. Is it a random issue or a timing issue?

I tried time.sleep() with delays of various lengths, but it seems unrelated to timing.

Is there any update on support for this?

Any progress? Did you fix the issue?

How can I fix this? That is what I am asking you. @kesong

We are having the same issue as @ag3hbk.

The first 3 extracted frames are distorted, after which they all appear normal. Within the inference step of our pipeline the model still sees the correct image (we do see correct output in the metadata).

If we extract each frame twice and save the second image, the saved frame is correct.

If we extract each frame twice and save the first image, the output is incorrect for the first frame and sometimes also the second.

For some of the distorted outputs, the first few rows of pixels do seem correct.

We do our extraction in a custom element, not a probe, so it does not seem to be related to the probe mechanism.

Also, we are not able to reproduce it on a discrete GPU, only on the NX.

We will try to make a minimal reproduction that we can share if that helps.
See probe_reproduction.py (1.4 KB)

Sample output of this code, with (R, G, B) = (mean(red_channel), mean(green_channel), mean(blue_channel)):

(R, G, B): (0.0008304398148148148, 0.00041329089506172837, 0.0004243827160493827)
(R, G, B): (0.001535493827160494, 0.0010619212962962963, 0.0010315393518518518)
(R, G, B): (0.0022145061728395064, 0.0016314621913580247, 0.0017626350308641975)
(R, G, B): (0.002990451388888889, 0.002199074074074074, 0.0024937307098765434)
(R, G, B): (255.0, 0.0, 0.0)
(R, G, B): (255.0, 0.0, 0.0)
(R, G, B): (255.0, 0.0, 0.0)
(R, G, B): (255.0, 0.0, 0.0)
(R, G, B): (255.0, 0.0, 0.0)
(R, G, B): (255.0, 0.0, 0.0)

As you can see, the first 4 frames are not red, while they should all be completely red.
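The per-channel mean check used in probe_reproduction.py can be reproduced on a synthetic frame: a solid-red RGBA image should report exactly (R, G, B) = (255.0, 0.0, 0.0). This is a standalone sketch of that check, not the original script.

```python
# Sketch of the channel-mean sanity check on a synthetic solid-red frame.
import numpy as np

def channel_means(rgba):
    """Mean of the R, G, and B channels of an HxWx4 RGBA frame."""
    r, g, b = (float(rgba[:, :, c].mean()) for c in range(3))
    return r, g, b

red = np.zeros((1200, 1200, 4), dtype=np.uint8)
red[:, :, 0] = 255   # R channel fully on
red[:, :, 3] = 255   # opaque alpha
print(channel_means(red))  # → (255.0, 0.0, 0.0)
```

A distorted extraction would show up here as near-zero means, exactly as in the first four lines of the sample output above.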

Kind regards,
Dik

Is it possible to figure out which plugin causes the broken image in your pipeline?

The NVIDIA team has only asked me to figure out the problem myself, rather than helping to resolve it. I have waited for over a month.

Sometimes, the image from the second call is broken too.

After some more testing, I think it either breaks already at the first nvvideoconvert or at the extraction itself. Internally the stream must still be uncorrupted, as I do get correct predictions from the model, so my guess is that it breaks at the extraction. Why it only breaks for the first few frames that are extracted is unclear (note: these do not need to be the first frames of the stream; they can also be frames later on. In my experience, the first 2–3 extractions seem to fail).

I can’t debug it any further, as I do not have access to the source code, nor do I have a firm grasp of C-like languages. I would expect further investigation to be done by NVIDIA. If there is anything I can reasonably add, I am happy to help.

@kesong Were you able to reproduce the error with the code I provided?
