Collecting images with pyds.get_nvds_buf_surface

Hi,

I have tried connecting the probe at two points:

- the sink of nvstreamdemux (named demux)
- the sink of nvdsosd (named taphere)

The only difference was that the probe on nvstreamdemux receives batched buffers.

The code is

    # Grab the element to tap: nvdsosd by default, nvstreamdemux as the alternative
    taphere = self.pipeline.get_by_name('taphere')
    #taphere = self.pipeline.get_by_name('demux')
    osdsinkpad = taphere.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvdsosd \n")
    # Attach a buffer probe so the callback sees every buffer passing this pad
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self.osd_sink_pad_buffer_probe, self.stats)
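
The probe body follows the frame-grabbing pattern from the Python samples. A trimmed sketch (assuming pyds, numpy as np, cv2 and Gst are imported in the class's module, with an nvvideoconvert + RGBA capsfilter upstream; the output filename is a placeholder):

    def osd_sink_pad_buffer_probe(self, pad, info, stats):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            # RGBA mapping of this frame's surface
            n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            frame_image = np.array(n_frame, copy=True, order='C')
            frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)
            cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_image)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK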

Regards

Hi,
Could you try deepstream-imagedata-multistream? The code for hooking in with OpenCV comes from that sample. We would like to know whether it works when you apply cv2.imwrite() to it.

Hi,

That worked perfectly and the images are not damaged.

python3 deepstream_imagedata-multistream.py file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 /home/david/frames

david@980da252eebb:~$ find frames/
frames/
frames/stream_0
frames/stream_0/frame_990.jpg
frames/stream_0/frame_1410.jpg
frames/stream_0/frame_630.jpg
frames/stream_0/frame_900.jpg
frames/stream_0/frame_1440.jpg
frames/stream_0/frame_180.jpg
frames/stream_0/frame_150.jpg
frames/stream_0/frame_540.jpg
frames/stream_0/frame_120.jpg
david@980da252eebb:~$

Could the problem be due to my using a motion JPEG stream (which I have no control over)?

Both are running in the same docker container based on nvcr.io/nvidia/deepstream:5.0-dp-20.04-devel

Hi,
Please share a test sample with us so that we can reproduce the issue and investigate further.

To see if the problem was the motion jpeg stream (mjpeg) or something else I modified the deepstream_imagedata-multistream.py example to process the mjpeg stream.

After adjusting the confidence levels it wrote out a few images. There was no distortion.

That means the problem lies somewhere in what I'm doing differently, so I'll set about creating a minimal example that reproduces it. One major difference is that I'm classifying the whole image and using a custom parser.

Hi,

While building the example I have been able to trace the problem down to one line of code

    nvvidconv0.set_property('src-crop', '210:0:360:360')

Without the crop there is no distortion in the captured image; with the crop I get the distortion.
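
For context, a minimal sketch of where that line sits relative to the RGBA conversion in the modified sample (element names here are illustrative, not the exact ones from my pipeline):

    # assumes: from gi.repository import Gst
    # converter + RGBA capsfilter pair that feeds the probe (illustrative names)
    nvvidconv0 = Gst.ElementFactory.make("nvvideoconvert", "convertor0")
    filter_rgba = Gst.ElementFactory.make("capsfilter", "filter_rgba")
    filter_rgba.set_property("caps",
        Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
    # the single line that triggers the distortion (left:top:width:height)
    nvvidconv0.set_property('src-crop', '210:0:360:360')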

Does this help isolate the problem, or do you still need the complete example?
(I'm putting one together using your example stream data and a dummy model.)

Regards

Hi,
Thanks for the information. It looks like we can apply a simple patch to deepstream_imagedata-multistream to reproduce the issue. It would be great if you could help with this.

Just a guess: I think the problem is here: frame_image = np.array(n_frame, copy=True, order='C')

I'm not sure why this even works in the official sample app if n_frame is a flat buffer, because OpenCV/NumPy has no way of knowing the image shape.

But it makes sense that src-crop has something to do with the problem, because it changes the shape, so the mapping from a flat buffer to a 3D array is now different.
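
As a toy illustration (not the DeepStream code): if the same flat bytes are reinterpreted with a different row width, every row wraps at the wrong point and the image looks banded or skewed:

    import numpy as np

    h, w = 360, 640
    flat = np.random.randint(0, 256, h * w * 4, dtype=np.uint8)  # stand-in for RGBA bytes
    correct = flat.reshape(h, w, 4)                 # rows line up as intended
    # reinterpreting the same bytes with a narrower width wraps every row early
    wrong = flat[: h * 360 * 4].reshape(h, 360, 4)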

The example code

I have an example based on deepstream_imagedata-multistream that shows the problem.

First convert the MP4 to an MJPEG file using convert.sh, then run

python3 deepstream_imagedata-test.py test.mjpeg frames
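
(convert.sh is not reproduced in this post; a rough equivalent, assuming ffmpeg is available and with placeholder filenames, would be:

    ffmpeg -i input.mp4 -an -c:v mjpeg -q:v 3 -f mjpeg test.mjpeg

)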

An example image

A second issue related to the same stream: if I use nvjpegdec instead of jpegdec there is an impressive memory leak, which crashes the process after about 30 minutes.

Hi,

For decoding MJPEG, you can use nvv4l2decoder mjpeg=1.
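
A rough sketch of wiring that decoder in Python (the jpegparse element and link order are assumptions, not part of the suggestion):

    # assumes: from gi.repository import Gst
    parser = Gst.ElementFactory.make("jpegparse", "jpeg-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "mjpeg-decoder")
    decoder.set_property("mjpeg", 1)  # per the suggestion above
    # add both to the pipeline and link: filesrc -> jpegparse -> nvv4l2decoder -> downstream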

Thanks, that worked (but without the "mjpeg=1" parameter). The distorted image with get_nvds_buf_surface remains.

Hi,
We can observe the issue with your test sample. It is under investigation. Will update.

I'm also experiencing an image banding issue when accessing image data through NvBufSurface; it looks very similar to what is posted here.

In my case, the issue seems to be limited to the image that I get through pyds.get_nvds_buf_surface. The detector model produces correct bounding boxes, and if I put something like a filesink at the end of the pipeline, the resulting video is fine, with no banding.

Additionally, as far as I can see, the issue manifests depending on the combination of video source and nvstreammux resolutions.

Hi,

Please upgrade to 5.0 GA and try again.

I have tested with the GA version and the problem has been solved.
Testing took a little longer than anticipated because other updates forced a change to dynamic batch size (PyTorch -> ONNX -> trtexec -> DeepStream).

Thanks

David

Hi, I am using the same code in the Python deepstream-test1 sample, but n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id) is giving a segmentation fault.

I am using DeepStream 5.0 on a Tesla K40m GPU with CUDA 10.2 and driver 440.100.

Hi bhatiyaarpit95,

Please open a new topic with more details about your issue. Thanks