I have a requirement to process multiple video streams in a video-analytics solution. I built my application starting from the “deepstream-imagedata-multistream” sample app.
In that example, however, the frame buffers are exposed as NumPy arrays on the host. I need these buffers as GPU tensors instead, so I can operate on them directly and avoid the cost of the device-to-host copy.
I followed this example by Paul Bridger: pytorch-video-pipeline/ghetto_nvds.py at master · pbridger/pytorch-video-pipeline · GitHub
With it I was able to access GPU tensors. Unfortunately, it does not work with multistream batched buffers: I get empty tensors for every stream in the batch except the first one.
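To make the question concrete, here is a simplified ctypes sketch of how I understand the batched layout. The struct definitions below are stand-ins, not the real `nvbufsurface.h` layout (the actual structs have many more fields), and the pointer values are synthetic; the point is that `surfaceList` holds one entry per stream, so each batch index has to be dereferenced separately rather than only `surfaceList[0]` as in the single-stream ghetto_nvds.py approach:

```python
import ctypes

# Simplified stand-ins for the DeepStream NvBufSurface structs. The real
# layout lives in nvbufsurface.h and has more fields; this sketch keeps only
# what is needed to show how a batched buffer is indexed per stream.
class NvBufSurfaceParams(ctypes.Structure):
    _fields_ = [
        ("width", ctypes.c_uint32),
        ("height", ctypes.c_uint32),
        ("dataPtr", ctypes.c_void_p),  # device pointer to this frame's pixels
    ]

class NvBufSurface(ctypes.Structure):
    _fields_ = [
        ("gpuId", ctypes.c_uint32),
        ("batchSize", ctypes.c_uint32),
        ("numFilled", ctypes.c_uint32),
        ("surfaceList", ctypes.POINTER(NvBufSurfaceParams)),
    ]

def frame_pointers(surface):
    """Yield the device pointer of every filled frame in the batch.

    Indexing surfaceList[i] for each filled slot is what a multistream
    version of the single-stream mapping would have to do.
    """
    for i in range(surface.numFilled):
        yield surface.surfaceList[i].dataPtr

# Synthetic demo: a 4-stream batch with fake device pointers.
params = (NvBufSurfaceParams * 4)(
    *[NvBufSurfaceParams(1920, 1080, 0x1000 * (i + 1)) for i in range(4)]
)
surf = NvBufSurface(
    gpuId=0, batchSize=4, numFilled=4,
    surfaceList=ctypes.cast(params, ctypes.POINTER(NvBufSurfaceParams)),
)
print(list(frame_pointers(surf)))  # one distinct pointer per stream
```

In my real pipeline only the first of these per-stream pointers seems to yield valid data, which is why I suspect I am mapping the batch wrong.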
I just want to understand whether there is a systematic way to access a batch of decoded image buffers directly as GPU tensors rather than NumPy arrays.