I have tried connecting at two points:
- the sink pad of nvstreamdemux (named demux)
- the sink pad of nvdsosd (named taphere)
The only difference was that the nvstreamdemux probe received batched buffers.
The code is:

taphere = self.pipeline.get_by_name('taphere')
#taphere = self.pipeline.get_by_name('demux')
osdsinkpad = taphere.get_static_pad("sink")
if not osdsinkpad:
    sys.stderr.write(" Unable to get sink pad of nvosd \n")
else:
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self.osd_sink_pad_buffer_probe, self.stats)
Hi,
Could you try deepstream-imagedata-multistream? The code hooking into OpenCV comes from that sample. We would like to know whether it works when you apply cv2.imwrite() to it.
To see whether the problem was the motion JPEG (MJPEG) stream or something else, I modified the deepstream_imagedata-multistream.py example to process the MJPEG stream.
After adjusting the confidence levels, it wrote out a few images. There was no distortion.
That means the problem lies somewhere in what I’m doing differently, so I’ll set about creating a minimal example that reproduces it. One major difference is that I’m classifying the whole image and using a custom parser.
Without the crop there is no distortion in the captured image.
With the crop I get the distortion.
Does this help isolate the problem, or do you still need the complete example?
(Which I’m trying to build using your example stream data and a fictive model.)
Hi,
Thanks for the information. It looks like a simple patch to deepstream_imagedata-multistream could reproduce the issue. It would be great if you could help with this.
Just a guess, but I think the problem is here: frame_image=np.array(n_frame,copy=True,order='C')
I’m not sure why this even works in the official sample app, because if n_frame is a flat buffer, OpenCV / NumPy has no way to know the image shape.
But it makes sense that src-crop has something to do with the problem, because it changes the shape, and the mapping of the flat buffer into a 3-D image is now different.
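To illustrate the guess above, here is a minimal NumPy sketch. It stands in for the buffer that pyds.get_nvds_buf_surface() returns; the 4x4 RGBA dimensions and values are purely hypothetical, chosen only to show that the sample's copy keeps the source's shape and so a flat source needs an explicit reshape with the current (post-crop) dimensions:

```python
import numpy as np

# Hypothetical 4x4 RGBA frame, flattened -- a stand-in for a flat
# buffer coming out of pyds.get_nvds_buf_surface().
h, w, c = 4, 4, 4
n_frame = np.arange(h * w * c, dtype=np.uint8)  # 1-D

# The sample's copy preserves whatever shape the source has; if the
# source is flat, the copy is still flat and carries no image shape.
frame_image = np.array(n_frame, copy=True, order='C')
assert frame_image.ndim == 1

# An explicit reshape with the frame's *current* dimensions is what
# ties the flat bytes back to rows and columns. After a crop, h and w
# must be the post-crop values, or the rows land in the wrong places.
frame_image = frame_image.reshape(h, w, c)
assert frame_image.shape == (4, 4, 4)
```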
A second issue related to the same stream: if I use nvjpegdec instead of jpegdec, there is a severe memory leak that crashes the process after about 30 minutes.
I’m also experiencing an image-banding issue when accessing image data through NvBufSurface; it looks very similar to what is posted here.
In my case the issue seems to be limited to the image I get through pyds.get_nvds_buf_surface. The detector model produces correct bounding boxes, and if I put something like a filesink at the end of the pipeline, the resulting video is fine, with no banding.
Additionally, as far as I can see, the issue manifests depending on the combination of video source / nvstreammux resolutions.
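The resolution dependence described above is consistent with a row-pitch mismatch: NvBufSurface rows are often padded to an alignment boundary, and reading a pitched buffer as if it were tightly packed shifts every row, which looks like banding. A minimal sketch, assuming hypothetical dimensions (a 720-pixel-wide RGBA frame pitched to 768 pixels):

```python
import numpy as np

# Hypothetical pitched frame: 720 real RGBA pixels per row, with each
# row padded out to a 768-pixel (3072-byte) boundary.
w, h, c = 720, 4, 4
pitch = 768 * c            # bytes per row, including padding
row_bytes = w * c          # bytes of real pixels per row

# Build the buffer: row i holds the value i, followed by zero padding.
rows = [np.full(row_bytes, i, dtype=np.uint8) for i in range(h)]
pitched = np.concatenate(
    [np.concatenate([r, np.zeros(pitch - row_bytes, dtype=np.uint8)])
     for r in rows]
)

# Naive interpretation: treat the buffer as tightly packed. Row 1 now
# starts 192 bytes too early and picks up row 0's padding -- this row
# drift is the visual banding described above.
naive = pitched[: h * row_bytes].reshape(h, w, c)

# Pitch-aware interpretation: step through the buffer one pitch at a
# time and keep only the real pixels of each row.
correct = pitched.reshape(h, pitch)[:, :row_bytes].reshape(h, w, c)

print((naive[1] == correct[1]).all())  # -> False: the rows disagree
```

Whether this is the actual cause here depends on how pyds maps the surface, but it would explain why only some source/nvstreammux resolution combinations trigger it: resolutions whose row size already matches the alignment need no padding.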
I have tested with the GA version and the problem has been solved.
Testing took a little longer than anticipated because other updates forced a change to dynamic batch sizes (pytorch -> onnx -> trtexec -> deepstream).
Hi, I am using the same code in the Python deepstream-test1 app, but n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id) gives a segmentation fault.
I am using DeepStream 5.0 on a Tesla K40m GPU with CUDA 10.2 and driver 440.100.