Get frames from a model that uses tensor_metadata to parse the output

Hello, I'm trying to save the frames from a DeepStream pipeline.
The model is the one mentioned in this topic:

How to make a parses function for my regression model - #24 by ayanasser

and since it's not a detector, I get the output from tensor_meta. I've got frames in other projects that differ from this one in this line: osdsinkpad = pgie.get_static_pad("src"); the working projects used
osdsinkpad = tiler.get_static_pad("sink")

and I could not use that line with this regression model, because then the code won't enter the l_user loop.
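
For context, the l_user loop I'm referring to looks roughly like this (a sketch along the lines of the deepstream-ssd-parser sample; output-tensor-meta=1 is assumed in the nvinfer config, and the actual parsing is left as a placeholder):

```python
# A sketch of the l_user loop (along the lines of the deepstream-ssd-parser sample).
# Assumptions: output-tensor-meta=1 in the nvinfer config; parsing is a placeholder.
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # The regression model's raw output is attached as frame user meta.
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # ... parse tensor_meta (output_layers_info, out_buf_ptrs_host) here ...
            try:
                l_user = l_user.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK


# Attached the way described above:
# osdsinkpad = pgie.get_static_pad("src")
# osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
```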

So the error that comes out when trying to get those frames using
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
is:

RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

I use the exact same way of saving in other projects that use detectors, and it works just fine.
• Hardware Platform (GPU)
• DeepStream Version 5.0
• TensorRT Version 7.0.0
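
For reference, that RuntimeError usually means the buffer at the probe point is not RGBA; the RGBA requirement is normally satisfied upstream of the probe by an nvvideoconvert followed by an RGBA capsfilter, roughly like this (a sketch in the style of the deepstream-imagedata-multistream sample; element names are illustrative, not from this project's code):

```python
# A sketch of forcing RGBA before the probe point, in the style of the
# deepstream-imagedata-multistream sample. Element names are illustrative,
# not taken from this project's code.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Converter able to produce RGBA from the batched buffers.
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")

# Capsfilter that pins the format to RGBA in NVMM memory.
caps_rgba = Gst.ElementFactory.make("capsfilter", "filter_rgba")
caps_rgba.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
)

# Linked between the inference stage and the tiler, e.g.:
# pgie.link(nvvidconv)
# nvvidconv.link(caps_rgba)
# caps_rgba.link(tiler)
```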

Sorry for the late response; we will investigate and update you soon.

Thank you, I am waiting, but it's kinda urgent, so let me know once you've investigated ^^

Hi,
Please do provide the setup info!

And, as described in gst_element_send_nvevent_new_stream_reset — Deepstream Deepstream Version: 6.1.1 documentation, this function returns the frame in NumPy format, and only RGBA format is supported. What's the format of the frame?
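
For reference, once the buffer really is RGBA, the returned NumPy array is typically saved along these lines (a sketch assuming OpenCV is available; gst_buffer and frame_meta come from the surrounding probe, and the output path is illustrative):

```python
# A sketch of saving the NumPy frame returned by get_nvds_buf_surface, assuming
# the buffer is RGBA and OpenCV is installed. gst_buffer and frame_meta come
# from the surrounding probe; the output path is illustrative.
import numpy as np
import cv2
import pyds


def save_frame(gst_buffer, frame_meta, out_path="frame.jpg"):
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    # Copy out of the mapped surface, then convert RGBA -> BGR for imwrite.
    frame_bgr = cv2.cvtColor(np.array(n_frame, copy=True, order="C"),
                             cv2.COLOR_RGBA2BGR)
    cv2.imwrite(out_path, frame_bgr)
```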

The setup is mentioned here; do you need more info?
The frame provided by DeepStream is already in RGBA format; I did not modify that.

But the problem is: when I used osdsinkpad = pgie.get_static_pad("src") to make the code loop over l_user, I could not retrieve the frame,
but when I use osdsinkpad = tiler.get_static_pad("sink")
I can retrieve the frame, but then the code does not loop over l_user.

So it's a trade-off, and I don't know how to make the code do both at the same time.
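
For context, a sketch of doing both in a single probe on the tiler sink pad, assuming output-tensor-meta=1 in the nvinfer config (so the tensor meta is still attached to the frames downstream) and an RGBA capsfilter upstream of the tiler; this is an illustration, not the thread's verified resolution:

```python
# A sketch of doing both in one probe on the tiler sink pad. Assumptions:
# output-tensor-meta=1 in the nvinfer config (so tensor meta travels downstream
# with the frames), and an nvvideoconvert + RGBA capsfilter upstream of the tiler.
import numpy as np
import cv2
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # 1) Retrieve and save the frame (requires RGBA at this point).
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_bgr = cv2.cvtColor(np.array(n_frame, copy=True, order="C"),
                                 cv2.COLOR_RGBA2BGR)
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_bgr)

        # 2) Loop over l_user for the regression model's tensor output.
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # ... parse the regression output from tensor_meta here ...
            try:
                l_user = l_user.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK


# osdsinkpad = tiler.get_static_pad("sink")
# osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)
```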

@mchi @kayccc
Any updates?
I think I could provide you with the code and the models if you need them ^^

That's cool~ it will save lots of time. You could put the repo on Google Drive and send a private message with the link.

Thanks!

Sir, check your inbox please ^^

Hi @ayanasser,
Thanks! What GPU are you using? The TRT engine must be used with the same GPU and TRT version it was built with.
And you are using DS 5.0, not 5.0.1, right?

The GPU is a 2060 Super,
TensorRT is 7.0,
DS is 5.0.

Unfortunately, I don't have this GPU on hand, so I can't run the pipeline for now.

Is this issue model-related? Could you tell me which function and code line cause it? I might be able to port it to a workable Python DS sample to check this issue.

Hello, we work on the same project together, and we run it on other GPUs too.
We tried running it on a 2080 and it runs just fine.
We meant that we used the 2060 for the TRT conversion.
Please try running it on whatever NVIDIA GPU you have.

Has this package been verified to work with the release DS 5.0 Docker image?

I got this error

root@7c5ea989b94d:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream_python_apps/DeepstreamDeploy_CrowdCounting# python3 CC-multistream.py
Traceback (most recent call last):
  File "CC-multistream.py", line 42, in <module>
    from DataExchangeHub.Publisher import Publisher
ModuleNotFoundError: No module named 'DataExchangeHub'

You could comment that out; we will not need it. But I will edit this script now and re-upload it.
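
For reference, one way to make the missing module non-fatal without deleting the import (a sketch; the rest of the script would need to check for None before using Publisher):

```python
# A sketch of making the optional DataExchangeHub dependency non-fatal instead
# of deleting the import. Code that uses Publisher would need a None check.
try:
    from DataExchangeHub.Publisher import Publisher
except ModuleNotFoundError:
    Publisher = None  # publishing is skipped when the package is absent
```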

Hello, this is a notification that we have re-uploaded it since then ^^
