Access batched decoded image buffers (NvBufSurface) with DeepStream + Python

I have a requirement to process multiple video streams for a video analytics solution. I took the "deepstream-imagedata-multistream" sample app as a reference and built my application on top of it.

In this example, however, the frame buffers are exposed as NumPy arrays on the host. I need these buffers as GPU tensors so I can use them directly and save the compute of copying them back and forth.
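For context, this is roughly how that sample's probe hands you each frame (paraphrased from memory, so details may differ slightly from the actual sample): pyds.get_nvds_buf_surface() gives a host-accessible NumPy view of the RGBA surface, and everything after that runs on the CPU.

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

import numpy as np
import cv2
import pyds

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Host-accessible NumPy view of the decoded RGBA surface for this
        # frame of the batch -- everything from here on is CPU-side work.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(n_frame, copy=True, order='C')
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
```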

I used the following example by Paul Bridger: pytorch-video-pipeline/ghetto_nvds.py at master · pbridger/pytorch-video-pipeline · GitHub

and was able to access GPU tensors. Unfortunately, this does not work with multistream (batched) buffers: I get empty tensors for every surface in the batch except the first one.
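This is roughly the pattern I am using per batch, adapted from that file (the NvBufSurface ctypes wrapper and the struct_copy_from / mem_copy_from helpers are from ghetto_nvds.py as I remember them, so treat this as a sketch rather than exact code):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

import torch
import ghetto_nvds  # ctypes NvBufSurface wrapper from pbridger/pytorch-video-pipeline

def batched_buffer_to_tensors(gst_buffer: Gst.Buffer):
    """Copy each filled surface of the batched NvBufSurface into its own CUDA tensor."""
    _, map_info = gst_buffer.map(Gst.MapFlags.READ)
    try:
        source_surface = ghetto_nvds.NvBufSurface(map_info)
        dest_surface = ghetto_nvds.NvBufSurface(map_info)

        frames = []
        # numFilled is how many frames nvstreammux packed into this batch.
        # The original example assumes a single surface (numFilled == 1),
        # so here every filled surface gets its own destination tensor.
        for i in range(source_surface.numFilled):
            params = source_surface.surfaceList[i]
            frame = torch.zeros(
                (params.height, params.width, 4),  # RGBA out of nvvideoconvert
                dtype=torch.uint8,
                device='cuda',
            )
            dest_surface.struct_copy_from(source_surface)
            # Point surface i of the destination at the tensor's memory and
            # let the wrapper do the device-to-device copy.
            dest_surface.surfaceList[i].dataPtr = frame.data_ptr()
            dest_surface.mem_copy_from(source_surface)
            frames.append(frame)

        # In practice only frames[0] holds real data; the rest come back as
        # zeros, which is the problem described above.
        return frames
    finally:
        gst_buffer.unmap(map_info)
```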

I just want to understand whether there is a systematic way to access a batch of decoded image buffers directly as GPU tensors instead of NumPy arrays.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi @naman.bhayani,
Please provide your setup info, as requested in other topics.

Thanks!