Deepstream Python Bindings cannot access frame from deepstream-test-app3

Following are my hardware configurations:

Jetson Nano
DeepStream 5.0
JetPack Version 4.4
TensorRT Version 7.1.3.0
CUDA 10.2

Using deepstream-test-app3 in Python, I am able to run inference on an RTSP feed. But when I try to convert the frames from the Gst buffer to a NumPy array using the following lines:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_image = np.array(n_frame, copy=True, order='C')
frame = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)

I get the following error:

RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

I do not get this error with a USB camera, only when I run deepstream-test-app3 on a file or RTSP feed.


Can deepstream-imagedata-multistream in Python work on your platform? You need to add a capsfilter after nvvideoconvert and before the tiler to convert the video format to RGBA; then you can get RGBA data in the probe.

Yes, imagedata-multistream worked. I did discover that nvvideoconvert and capsfilter are needed for the conversion.

But the biggest issue I am facing right now is that imagedata-multistream is very slow on my platform, which is due to the conversion from gst_buffer to a NumPy array. I really need the frames to perform further image processing operations, if not in real time then at least at 1 fps.

Can you add sink.set_property("sync", 0) in the app and try? Currently deepstream-test-app3 uses nveglglessink; it is better to replace it with fakesink to test performance.
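For reference, a minimal sketch of that swap, created alongside the other elements in the app (the variable name is illustrative):

# Sketch: replace nveglglessink with fakesink for a pure performance test
sink = Gst.ElementFactory.make("fakesink", "fakesink")
if not sink:
    sys.stderr.write(" Unable to create fakesink \n")
# Do not sync on buffer timestamps, so rendering never throttles the pipeline
sink.set_property("sync", 0)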

I have already set sink "sync" to 0. So I added nvvideoconvert and capsfilter to deepstream-test-app3. My pipeline looks like this:
source_bin → streammux → pgie → tracker → nvvidconv → capsfilter → tiler → nvvidconv → nvosd → sink
There is no issue when I simply run inference. The issue comes when converting the gst_buffer to a NumPy array; then the whole pipeline slows down on the Jetson Nano, i.e. when I comment out the three conversion lines below it runs at its normal speed, but it slows down when I uncomment them:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_image = np.array(n_frame, copy=True, order='C')
frame = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)

Will you write the NumPy array to a JPEG image file? We will check the performance on Nano.

Yes. I am using OpenCV to write the NumPy array to an image. It is fine if I only want to process at 1 fps, but for real-time processing it is not feasible.
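Roughly, the write step in my probe looks like this (the file-name pattern is illustrative; frame is the BGRA array from cv2.cvtColor above):

# Sketch of the per-frame write inside the pad probe
cv2.imwrite("stream_%d_frame_%d.jpg" % (frame_meta.pad_index, frame_meta.frame_num), frame)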

We have tested on Nano. The current deepstream-imagedata-multistream app without any OpenCV processing runs at an average of 7 frames per second.

How about using Pillow instead of cv2? It is much lighter and probably faster than cv2:

I am using it like this:
from PIL import Image

Image.fromarray(n_frame[:, :, :3]).save('file_name.jpg')


Hey @Fiona.Chen @preronamajumder, my current pipeline in deepstream-test-app3 is:

streammux.link(queue1)
queue1.link(queue2)
##pgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if is_aarch64():
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue5)
    queue5.link(sink)  

Where am I supposed to add the caps? I am having the same error.
Thanks

You need to use the same pipeline as in imagedata-multistream after pgie.
Element initialisations and pipeline should be as follows:

nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
if not nvvidconv1:
    sys.stderr.write(" Unable to create nvvidconv1 \n")
print("Creating filter1 \n ")
caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
if not filter1:
    sys.stderr.write(" Unable to get the caps filter1 \n")
filter1.set_property("caps", caps1)

streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
queue2.link(nvvidconv1)
nvvidconv1.link(queue3)
queue3.link(filter1)
filter1.link(queue4)
queue4.link(tiler)
tiler.link(queue5)
queue5.link(nvvidconv)
nvvidconv.link(queue6)
queue6.link(nvosd)
if is_aarch64():
    nvosd.link(queue7)
    queue7.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue7)
    queue7.link(sink)

These have been taken from the imagedata-multistream test app.
You can directly link queue1 to nvvidconv1 if pgie is not used; there is no need to link queue1 to queue2 and then queue2 to the rest of the elements.
Note that nvvidconv and nvvidconv1 are different.
Do not forget to initialise the extra queues and add nvvidconv1 and filter1 to the pipeline, as sketched below.
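A minimal sketch of that setup, assuming the pipeline variable and the queue numbering from the linking code above:

# Sketch: create the extra queues and register the new elements with the pipeline
queue6 = Gst.ElementFactory.make("queue", "queue6")
queue7 = Gst.ElementFactory.make("queue", "queue7")
for element in (nvvidconv1, filter1, queue6, queue7):
    if not element:
        sys.stderr.write(" Unable to create an extra element \n")
    pipeline.add(element)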


What are these queues used for? They aren't present in my imagedata example.

Thanks, now I am able to save frames. But if I am using the same pipeline as imagedata, how is this example different? And how am I able to get 30 fps on each stream, while in the imagedata example my fps decreased by a factor of n when n streams were used?

Hello, this works when I am using a video file as the source, but if I use a USB cam I get the same error again with the above pipeline. I've modified my create_source_bin as follows:
def create_source_bin(index, uri):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.

    # uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    # if not uri_decode_bin:
    #     sys.stderr.write(" Unable to create uri decode bin \n")
    # # We set the input uri to the source element
    # uri_decode_bin.set_property("uri",uri)
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
    if not source:
        sys.stderr.write(" Unable to create source \n")

    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
    if not caps_v4l2src:
        sys.stderr.write(" Could not create caps_v4l2src \n")

    # videoconvert to make sure a superset of raw formats are supported
    vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
    if not vidconvsrc:
        sys.stderr.write(" Unable to create videoconvert \n")

    # nvvideoconvert to convert incoming raw buffers to NVMM Mem (NvBufSurface API)
    nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
    if not nvvidconvsrc:
        sys.stderr.write(" Unable to create Nvvideoconvert \n")

    caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
    if not caps_vidconvsrc:
        sys.stderr.write(" Unable to create capsfilter \n")

    caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=15/1"))
    caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))

    source.set_property('device', uri)

    nbin.add(source)
    nbin.add(caps_v4l2src)
    nbin.add(vidconvsrc)
    nbin.add(nvvidconvsrc)
    nbin.add(caps_vidconvsrc)

    source.link(caps_v4l2src)
    caps_v4l2src.link(vidconvsrc)
    vidconvsrc.link(nvvidconvsrc)
    nvvidconvsrc.link(caps_vidconvsrc)

    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin

    # uri_decode_bin.connect("pad-added",cb_newpad,nbin)
    # uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.

    # Gst.Bin.add(nbin,uri_decode_bin)

    srcpad = caps_vidconvsrc.get_static_pad("src")

    bin_pad = nbin.add_pad(Gst.GhostPad.new("src", srcpad))

    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

And my pipeline is:

    print("Linking elements in the Pipeline \n")
streammux.link(queue1)
queue1.link(queue2)
queue2.link(nvvidconv1)
nvvidconv1.link(queue3)
queue3.link(filter1)
filter1.link(queue4)
queue4.link(tiler)
tiler.link(queue5)
queue5.link(nvvidconv)
nvvidconv.link(queue6)
queue6.link(nvosd)
if is_aarch64():
    nvosd.link(queue7)
    queue7.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue7)
    queue7.link(sink)

I am getting this error:

Using winsys: x11
nvbufsurface: invalid colorFormat 0
nvbufsurface: Error in allocating buffer
Error(-1) in buffer allocation

** (python3:18546): CRITICAL **: 12:55:53.648: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
Error: gst-resource-error-quark: failed to activate bufferpool (13): gstbasetransform.c(1670): default_prepare_output_buffer (): /GstPipeline:pipeline0/GstBin:source-bin-00/Gstnvvideoconvert:convertor_src2:
failed to activate bufferpool
Exiting app
@Fiona.Chen

For a USB cam you have to use the same pipeline as test app 2 before adding pgie; the video decoding is different for USB and RTSP/file sources. After pgie it should be the same as imagedata-multistream.
The source bin is for multiple sources, I guess, so it is for multiple file or RTSP inputs. I have not tried multiple USB cams, but for a single USB cam, test app 2 (source pipeline) + imagedata-multistream (sink pipeline) works fine.
I think the conversion of frames to a NumPy array takes up a lot of resources. I limited my RTSP stream to skip 15 frames, which enabled me to run a real-time application (pgie + tracker) with up to 4 sources on a Jetson Nano.

In my case it does not take up so many resources, as I am not using pgie.

The frame conversion itself is resource-hungry, so converting and writing every frame will reduce the fps. I usually let my app do the conversion once every second, roughly as sketched below.
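A minimal sketch of that throttling inside the buffer probe, assuming roughly a 30 fps stream (the constant and file name are illustrative):

# Sketch: only pay the gst_buffer -> NumPy conversion cost once per second
SAVE_EVERY_N_FRAMES = 30  # tune to the stream's frame rate

if frame_meta.frame_num % SAVE_EVERY_N_FRAMES == 0:
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame_image = np.array(n_frame, copy=True, order='C')
    frame = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)
    cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame)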