Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): RTX 2080 Ti
• DeepStream Version: 5.0 DP
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7
• NVIDIA GPU Driver Version (valid for GPU only): Driver Version: 440.64.00, CUDA Version: 10.2
This works, and valid JPEG files are created.
But the image is distorted: every line appears to be shifted further across by 32 bytes.
The RTP stream that is also produced is not distorted.
I tried changing cv2.COLOR_RGBA2BGRA to cv2.COLOR_RGBA2BGR, but it did not help.
The problem is the same. Logging shows the numpy array is the right shape, so the data backing it must be mapped incorrectly.
COLOR_RGBA2BGRA is used in the deepstream_imagedata-multistream example.
Hi,
Where in the pipeline do you call pyds.get_nvds_buf_surface()? It looks like you are getting the buffer in RGBA block-linear format. The buffer has to be pitch-linear for OpenCV.
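In deepstream_imagedata-multistream the frame is forced to RGBA by an nvvideoconvert plus a capsfilter placed before the probe point, so pyds.get_nvds_buf_surface() can map it for NumPy/OpenCV. A minimal sketch of that stage (element names here are placeholders):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Convert to RGBA before the probed element, as the imagedata sample does,
# so the mapped buffer is in a layout NumPy/OpenCV can read.
conv = Gst.ElementFactory.make("nvvideoconvert", "convert-to-rgba")
caps = Gst.ElementFactory.make("capsfilter", "rgba-caps")
caps.set_property(
    "caps",
    Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"),
)
# ...add both to the pipeline and link: upstream -> conv -> caps -> probed element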
I have tried connecting at two points:
• the sink of nvstreamdemux (named demux)
• the sink of nvdsosd (named taphere)
The only difference was getting batches at the nvstreamdemux sink.
The code is:

taphere = self.pipeline.get_by_name('taphere')
# taphere = self.pipeline.get_by_name('demux')
osdsinkpad = taphere.get_static_pad("sink")
if not osdsinkpad:
    sys.stderr.write(" Unable to get sink pad of nvosd \n")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self.osd_sink_pad_buffer_probe, self.stats)
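The probe body follows the deepstream_imagedata-multistream pattern; a condensed sketch of it (the real code is longer, and the output filename is a placeholder):

import numpy as np
import cv2
import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(self, pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the RGBA frame into a NumPy array and take a deep copy.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_image = np.array(n_frame, copy=True, order='C')
        frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_image)  # placeholder path
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK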
Hi,
Could you try deepstream-imagedata-multistream? The OpenCV hook code comes from that sample. I would like to know whether it works when you apply cv2.imwrite() there.
To see if the problem was the motion JPEG stream (MJPEG) or something else, I modified the deepstream_imagedata-multistream.py example to process the MJPEG stream.
After adjusting the confidence levels it wrote out a few images. There was no distortion.
That means the problem lies somewhere in what I’m doing differently. I’ll set about creating a minimal example that reproduces the problem. One major difference is that I’m classifying the whole image and have a custom parser.
The crop turned out to be the trigger:
• without the crop there is no distortion in the captured image
• with the crop I get the distortion
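The crop is nvvideoconvert's src-crop property; roughly this (the crop rectangle is just an example value, not my real configuration):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvvideoconvert with a source crop "left:top:width:height"; with this
# property set the captured image is distorted, without it it is fine.
crop_conv = Gst.ElementFactory.make("nvvideoconvert", "crop-convert")
crop_conv.set_property("src-crop", "100:100:640:480")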
Does this help isolate the problem, or do you still need the complete example?
(Which I’m trying to build using your example stream data and a dummy model.)
Hi,
Thanks for the information. It looks like we can apply a simple patch to deepstream_imagedata-multistream to reproduce the issue. It would be great if you could help with this.
Just a guess, but I think the problem is here: frame_image = np.array(n_frame, copy=True, order='C')
I’m not sure why this even works in the official sample app if n_frame is a flat buffer, because OpenCV / NumPy has no way to know the image shape.
But it makes sense that src-crop has something to do with the problem, because it changes the shape, and the buffer mapping from flat to 3D is now different.
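A toy NumPy reconstruction of that mismatch, assuming the underlying buffer is pitch-linear with 32 padding bytes per row (all dimensions here are made up):

import numpy as np

height, width, channels = 4, 6, 4          # toy frame
row_bytes = width * channels
pitch = row_bytes + 32                     # 32 bytes of row padding

buf = np.zeros((height, pitch), dtype=np.uint8)
for row in range(height):
    buf[row, :row_bytes] = row + 1         # fill only the real pixels

# Correct view: honour the pitch, then crop each row to the real width.
good = buf[:, :row_bytes].reshape(height, width, channels)

# Wrong view: treat the buffer as tightly packed. Row k now starts
# 32 * k bytes too early -- the progressive shear seen in the JPEGs.
bad = buf.ravel()[:height * row_bytes].reshape(height, width, channels)

print(np.array_equal(good, bad))           # False: rows drift by 32 bytes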
A second issue related to the same stream: if I use nvjpegdec instead of jpegdec, there is an impressive memory leak which crashes the process after about 30 minutes.
I’m also experiencing an image banding issue when accessing image data through NvBufSurface; it looks very similar to what is posted here.
In my case, the issue seems to be limited to the image I get through pyds.get_nvds_buf_surface. The detector model produces correct bounding boxes, and if I put something like a filesink at the end of the pipeline, the resulting video is fine, with no banding.
Additionally, as far as I can see, the issue manifests depending on the combination of video source and nvstreammux resolutions.
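That would fit the pitch-alignment theory above: widths whose RGBA row size is already a multiple of the row-pitch alignment need no padding, while others get padded and band. A back-of-the-envelope check (the 256-byte alignment is an assumption, not a documented value):

BYTES_PER_PIXEL = 4   # RGBA
ALIGN = 256           # assumed row-pitch alignment in bytes

def padded_pitch(width: int) -> int:
    row = width * BYTES_PER_PIXEL
    return -(-row // ALIGN) * ALIGN   # round row size up to the alignment

for width in (1920, 1280, 1000, 962):
    row = width * BYTES_PER_PIXEL
    pitch = padded_pitch(width)
    note = "tightly packed" if pitch == row else f"{pitch - row} padding bytes/row"
    print(f"width {width}: row {row} B, pitch {pitch} B -> {note}")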