Collecting images with pyds.get_nvds_buf_surface

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): RTX 2080 Ti
• DeepStream Version: 5.0 DP
• JetPack Version (valid for Jetson only): n/a
• TensorRT Version: 7
• NVIDIA GPU Driver Version (valid for GPU only): 440.64.00, CUDA 10.2

I’m collecting the image with
# n_frame is a numpy view onto the RGBA surface for this batch slot
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_image = np.array(n_frame, copy=True, order='C')  # detach a copy from the mapped buffer
frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)
cv2.imwrite("frame_" + str(frame_meta.frame_num) + ".jpg", frame_image)

which works, and valid JPEG files are created.
But the image is distorted: every line appears to be shifted a further 32 bytes across.
The RTP stream that is also produced is not distorted.

The GStreamer pipeline:
nvstreammux name=mux batch-size={3} width=224 height=224 live-source=0 nvbuf-memory-type=3 batched-push-timeout={4} !
nvinfer config-file-path= …/config/onnx_pgie.txt batch-size={3} model-engine-file= …/…/deepstream_data/resnet50_15.onnx_b4_gpu0_fp32.engine ! nvstreamdemux name=demux
souphttpsrc location=http://{0}:{1}/?action=stream ! multipartdemux ! image/jpeg,width=640,height=360 ! jpegdec !
nvvideoconvert src-crop=210:0:360:360 nvbuf-memory-type=3 ! video/x-raw(memory:NVMM),width=(int)224,height=(int)224,format=RGBA,pixel-aspect-ratio=1/1 ! queue ! mux.sink_0
demux.src_0 ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw(memory:NVMM), format=RGBA ! nvdsosd name=taphere !
nvvideoconvert ! jpegenc ! jpegparse ! rtpjpegpay ! rtpstreampay ! tcpserversink host=0.0.0.0 port={2} blocksize=512000 buffers-max=200
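
For context, I build and launch that description with Gst.parse_launch, roughly like this (a sketch; PIPELINE_TEMPLATE and the substitution variables are placeholders for my actual configuration):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    # Hypothetical names: PIPELINE_TEMPLATE holds the description above;
    # {0}=camera host, {1}=camera port, {2}=tcpserversink port,
    # {3}=batch size, {4}=batched-push-timeout (microseconds)
    desc = PIPELINE_TEMPLATE.format(cam_host, cam_port, tcp_port, batch_size, timeout_us)
    pipeline = Gst.parse_launch(desc)
    pipeline.set_state(Gst.State.PLAYING)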

[attached image: frame_1315.jpg, showing the distortion]

Hi,
Please try

frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGR)

I’m not sure whether BGRA is supported, but it should work to pass BGR buffers to cv2.imwrite().

Hi,

I tried changing cv2.COLOR_RGBA2BGRA to cv2.COLOR_RGBA2BGR, but it did not help; the problem is the same.
Logging shows the numpy array is the right shape, so the data backing it must be mapped incorrectly.

COLOR_RGBA2BGRA is used in the deepstream_imagedata-multistream example.

Thanks

Hi,
Where in the pipeline do you call pyds.get_nvds_buf_surface()? It looks like you are getting the buffer in RGBA block-linear format. The buffer has to be pitch-linear for OpenCV.
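
As a rough illustration of what pitch-linear means in practice: each row of the surface can carry padding bytes at the end, and OpenCV needs the rows densely packed. A small numpy sketch (all values assumed; the real pitch comes from the underlying NvBufSurface):

    import numpy as np

    # Illustrative values only -- the real pitch must be read from the surface
    width, height, bpp = 224, 224, 4          # RGBA frame
    pitch = 1024                              # bytes per row, >= width * bpp
    mapped = np.zeros(height * pitch, dtype=np.uint8)   # stand-in for the mapped buffer

    # View the padded rows with the correct stride, then make a dense copy
    view = np.lib.stride_tricks.as_strided(
        mapped, shape=(height, width, bpp), strides=(pitch, bpp, 1))
    dense = np.ascontiguousarray(view)        # safe to pass to cv2 now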

Hi,

I have tried connecting at two points:
the sink of nvstreamdemux (named demux)
the sink of nvdsosd (named taphere)
The only difference was getting batched buffers at nvstreamdemux.

The code is

    taphere = self.pipeline.get_by_name('taphere')
    #taphere = self.pipeline.get_by_name('demux')
    osdsinkpad = taphere.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    # self.stats is passed through to the probe callback as user data
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self.osd_sink_pad_buffer_probe, self.stats)
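
For completeness, the probe callback itself follows the pattern from the sample, roughly (a sketch; the body is abbreviated here):

    def osd_sink_pad_buffer_probe(self, pad, info, stats):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            # grab the frame and save it, as in the snippet at the top of the thread
            n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK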

Regards

Hi,
Could you try deepstream-imagedata-multistream? The OpenCV hook code is from that sample; we would like to know whether it works when you apply cv2.imwrite() there.

Hi,

That worked perfectly, and the images are not damaged:

python3 deepstream_imagedata-multistream.py file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 /home/david/frames

david@980da252eebb:~$ find frames/
frames/
frames/stream_0
frames/stream_0/frame_990.jpg
frames/stream_0/frame_1410.jpg
frames/stream_0/frame_630.jpg
frames/stream_0/frame_900.jpg
frames/stream_0/frame_1440.jpg
frames/stream_0/frame_180.jpg
frames/stream_0/frame_150.jpg
frames/stream_0/frame_540.jpg
frames/stream_0/frame_120.jpg
david@980da252eebb:~$

Could the problem be due to my using a motion JPEG stream (which I have no choice about)?

Both are running in the same docker container based on nvcr.io/nvidia/deepstream:5.0-dp-20.04-devel

Hi,
Please share a test sample with us so that we can reproduce it and investigate further.

To see whether the problem was the motion JPEG stream (MJPEG) or something else, I modified the deepstream_imagedata-multistream.py example to process the MJPEG stream.

After adjusting the confidence levels it wrote out a few images. There was no distortion.

That means the problem lies somewhere in what I’m doing differently, so I’ll set about creating a minimal example that reproduces it. One major difference is that I’m just classifying the whole image and have a custom parser.

Hi,

While building the example I have been able to trace the problem down to one line of code:

nvvidconv0.set_property('src-crop', '210:0:360:360')

Without the crop there is no distortion in the captured image;
with the crop I get the distortion.

Does this help isolate the problem, or do you still need the complete example?
(I’m trying to build one using your example stream data and a fictive model.)

Regards

Hi,
Thanks for the information. It looks like we can apply a simple patch to deepstream_imagedata-multistream to reproduce the issue. It would be great if you could help with this.

Just a guess, but I think the problem is here: frame_image = np.array(n_frame, copy=True, order='C')

I’m not sure why this even works in the official sample app if n_frame is a flat buffer, because OpenCV/numpy has no way of knowing the image shape.

But it makes sense that src-crop has something to do with the problem, because it changes the shape, and the mapping of the buffer from flat to 3D is then different.
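
To make that concrete, here is a small numpy sketch (all numbers assumed for illustration) of how 32 unaccounted padding bytes per row would produce exactly the per-line shift described above:

    import numpy as np

    # Assumed numbers for illustration only
    width, height, bpp = 224, 224, 4       # RGBA frame as configured on nvstreammux
    pitch = width * bpp + 32               # hypothetical pitch: 32 padding bytes per row
    surface = np.arange(height * pitch, dtype=np.uint8)   # stand-in for the mapped surface

    # Correct mapping: honour the pitch, then drop the padding on each row
    good = surface.reshape(height, pitch)[:, :width * bpp]

    # Wrong mapping: assume the rows are densely packed. Row n then starts
    # 32 * n bytes too early, so every line drifts further across the image
    bad = surface[:height * width * bpp].reshape(height, width * bpp)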

The example code

I have an example based on deepstream_imagedata-multistream that shows the problem.

First convert the mp4 to an MJPEG file using convert.sh.
Then run:
python3 deepstream_imagedata-test.py test.mjpeg frames
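
convert.sh just transcodes the sample clip to MJPEG, roughly along these lines (a sketch of the idea, not the exact script):

    ffmpeg -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 \
        -c:v mjpeg -q:v 5 -an test.mjpeg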

[example image attached, showing the distortion]

A second issue related to the same stream: if I use nvjpegdec instead of jpegdec, there is an impressive memory leak which crashes the process after about 30 minutes.

Hi,

For decoding MJPEG, you can use nvv4l2decoder mjpeg=1.
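
The decode branch of your pipeline would then look roughly like this (a sketch, untested against your exact caps):

    souphttpsrc location=http://{0}:{1}/?action=stream ! multipartdemux !
    image/jpeg,width=640,height=360 ! jpegparse ! nvv4l2decoder mjpeg=1 !
    nvvideoconvert src-crop=210:0:360:360 nvbuf-memory-type=3 ! ...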

Thanks, that worked (but without the "mjpeg=1" parameter). The distorted image with get_nvds_buf_surface remains.

Hi,
We can observe the issue with your test sample. It is under investigation. Will update.

I’m also experiencing an image banding issue when accessing image data through NvBufSurface; it looks very similar to what is posted here.

In my case, the issue seems to be limited to the image that I get through pyds.get_nvds_buf_surface. The detector model produces correct bounding boxes, and if I put something like a filesink at the end of the pipeline, the resulting video is fine, with no banding.

Additionally, as far as I can see, the issue manifests depending on the combination of video source and nvstreammux resolutions.
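
If the pitch-alignment guess earlier in the thread is right, whether a given width shows banding should be predictable (a sketch; the 256-byte alignment is an assumption):

    def has_row_padding(width, bpp=4, align=256):
        # align is an assumed value; the real alignment depends on the platform
        return (width * bpp) % align != 0

    print(has_row_padding(224))    # True  -> rows padded -> banding when mis-mapped
    print(has_row_padding(1920))   # False -> 1920 * 4 = 7680 is 256-aligned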

Hi,

Please upgrade to 5.0 GA and try again.