DeepStream Python bindings cannot access frame from deepstream-test-app3

Following is my hardware configuration:

Jetson Nano
DeepStream 5.0
JetPack Version 4.4
TensorRT Version
CUDA 10.2

Using deepstream-test-app3 in Python, I am able to run inference on an RTSP feed. But when I try to convert the frames from the Gst buffer to a numpy array using the following lines:
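(These are presumably the standard conversion lines from the deepstream-imagedata-multistream sample; sketched here with the pyds surface simulated as a plain numpy array, since the real call needs a live DeepStream buffer, and the 1080p RGBA shape is just an assumed example:)

```python
import numpy as np

# Real call in the buffer probe (needs DeepStream / a mapped Gst buffer):
#   n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
# Simulated here as a 1080p RGBA surface:
n_frame = np.zeros((1080, 1920, 4), dtype=np.uint8)

# Deep copy from the mapped surface into ordinary system memory;
# this copy is the expensive step on the Nano
frame_copy = np.array(n_frame, copy=True, order="C")
```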


I get the following error:

RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

I do not get this error with a USB camera. I get this error when I run deepstream-test-app3 on a file or an RTSP feed.

Does deepstream-imagedata-multistream in Python work on your platform? You need to add a capsfilter after nvvideoconvert and before the tiler to convert the video format to RGBA; then you can get RGBA data in the probe.
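(A minimal sketch of that advice; the PyGObject/Gst calls need DeepStream on the box, so they are only shown in the comments, and the element names are illustrative. The caps string is the one that forces RGBA output in NVMM memory:)

```python
# Caps string that makes nvvideoconvert output RGBA into NVMM memory,
# which is the format get_nvds_buf_surface supports
RGBA_CAPS = "video/x-raw(memory:NVMM), format=RGBA"

# With PyGObject/Gst (sketch only; illustrative element names):
#   nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
#   capsfilter = Gst.ElementFactory.make("capsfilter", "filter")
#   capsfilter.set_property("caps", Gst.Caps.from_string(RGBA_CAPS))
#   ...and link: nvvidconv -> capsfilter -> tiler

# Resulting element order around the tiler:
ORDER = ["nvvideoconvert", "capsfilter", "tiler"]
```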

Yes, imagedata-multistream worked. And I did discover that nvvideoconvert and a capsfilter are needed for the conversion.

But the biggest issue I am facing right now is that imagedata-multistream is very slow on my platform, due to the conversion from gst_buffer to numpy array. I really need the frames to perform further image-processing operations, if not in real time then at least at 1 fps.
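(One way to stay within a ~1 fps budget is to run the expensive buffer-to-numpy copy only on every Nth frame in the probe. A hedged sketch; the interval of 30 assumes a roughly 30 fps source, and `should_process` is a hypothetical helper, not part of the sample:)

```python
PROCESS_EVERY_N = 30  # ~30 fps input -> roughly one processed frame per second

def should_process(frame_number, every_n=PROCESS_EVERY_N):
    # In the buffer probe, call this with frame_meta.frame_num and only run
    # pyds.get_nvds_buf_surface + the numpy copy when it returns True.
    return frame_number % every_n == 0

# Simulate 90 incoming frames: only 3 would be converted
processed = [n for n in range(90) if should_process(n)]
# processed -> [0, 30, 60]
```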

Can you add sink.set_property("sync", 0) in the app and try? deepstream-test-app3 currently uses nveglglessink; it is better to replace it with fakesink to test performance.

I have already set the sink's "sync" property to 0. I also added nvvideoconvert and a capsfilter to deepstream-test-app3; my pipeline now looks like this:
source_bin -> streammux -> pgie -> tracker -> nvvidconv -> capsfilter -> tiler -> nvvidconv -> nvosd -> sink
There is no issue when I simply run inference. The issue appears when converting the gst_buffer to a numpy array: the whole pipeline slows down on the Jetson Nano. That is, when I comment out the 3 conversion lines it runs at normal speed, but slows down when I uncomment them.


Do you write the numpy array out to a JPEG image file? We will check the performance on the Nano.

Yes. I am using OpenCV to write the numpy array to an image. It is fine if I only want to process at 1 fps, but for real-time processing it is not feasible.

We have tested on the Nano. The current deepstream-imagedata-multistream app without any OpenCV processing runs at an average of 7 frames per second.

How about using Pillow instead of cv2? It is much lighter and probably faster than cv2.

I am using it like this:
from PIL import Image
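(For completeness, a sketch of writing out an RGBA frame with Pillow; the frame here is a simulated numpy array, whereas in the app it would be the copy made from get_nvds_buf_surface. Note that JPEG has no alpha channel, so the image must be converted to RGB before saving:)

```python
import numpy as np
from PIL import Image

# Simulated RGBA frame (in the app: the array copied out of the Gst buffer)
frame = np.zeros((1080, 1920, 4), dtype=np.uint8)

img = Image.fromarray(frame, mode="RGBA")
rgb = img.convert("RGB")   # JPEG cannot store the alpha channel
# rgb.save("frame.jpg")    # output path is illustrative
```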