GStreamer NVMM memory buffer to torch tensor with zero copy

Hi, I’m trying to convert a GStreamer buffer to a torch tensor without going through a NumPy array. For several reasons I don’t want to use DeepStream elements (I don’t need inference or visualisation, just a torch tensor frame buffer from the decoded video, to reduce the latency introduced by buffer copies).

How can I achieve this with a simple buffer probe?

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch('''
    filesrc location=video.mp4 num-buffers=200 !
    decodebin !
    nvvideoconvert !
    video/x-raw(memory:NVMM),format=RGBA !
    fakesink name=s
''')
def on_frame_probe(pad, info):
    buf = info.get_buffer()
    caps = pad.get_current_caps()
    caps_structure = caps.get_structure(0)
    height = caps_structure.get_value('height')
    width = caps_structure.get_value('width')

    is_mapped, map_info = buf.map(Gst.MapFlags.READ)
    if is_mapped:
        try:
            # Add code for conversion here.
            pass
        finally:
            buf.unmap(map_info)
    return Gst.PadProbeReturn.OK

pipeline.get_by_name('s').get_static_pad('sink').add_probe(
    Gst.PadProbeType.BUFFER, on_frame_probe)
Please guide me on how I can solve this; I’m not very familiar with C/C++. I’m using the DeepStream Docker image 6.4-triton-multiarch.
Architecture: x86-64 with an NVIDIA dGPU

Thanks

Yes. You can add a probe to the src pad of nvvideoconvert and use the method from deepstream_imagedata-multistream.py to get the buffer.
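
For reference, the relevant pattern (adapted from the deepstream_imagedata-multistream_cupy variant of that sample) looks roughly like the sketch below. It assumes the Gst import from the original post, a pipeline that contains nvstreammux so batch metadata is attached, RGBA NVMM surfaces, and DeepStream >= 6.1 for get_nvds_buf_surface_gpu on x86 dGPU. Treat it as a sketch, not a drop-in probe:

import ctypes
import cupy as cp
import pyds

def on_frame_probe(pad, info):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # On x86 dGPU this returns a device pointer to the RGBA surface.
        data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(
            hash(gst_buffer), frame_meta.batch_id)
        # Unwrap the PyCapsule into a raw pointer.
        ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
        ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
        raw_ptr = ctypes.pythonapi.PyCapsule_GetPointer(dataptr, None)
        # Wrap the existing device allocation in CuPy without copying.
        mem = cp.cuda.UnownedMemory(raw_ptr, size, None)
        frame = cp.ndarray(shape=shape, dtype=data_type,
                           memptr=cp.cuda.MemoryPointer(mem, 0),
                           strides=strides, order='C')
        # `frame` now aliases the NVMM frame on the GPU.
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK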


Thank you, I was successfully able to cast it to a torch tensor from CuPy (zero-copy handoff sketched below).
Is it possible to do so without using the pyds library bindings?
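
For completeness, the CuPy-to-torch step is zero-copy via DLPack. A minimal sketch, with a stand-in array in place of the NVMM-backed frame from the probe above:

import cupy as cp
import torch

frame = cp.zeros((720, 1280, 4), dtype=cp.uint8)  # stand-in for the NVMM-backed CuPy frame
tensor = torch.from_dlpack(frame)  # shares the same device memory, no copy
# On older torch versions (< 1.10):
# tensor = torch.utils.dlpack.from_dlpack(frame.toDlpack())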

No. You have to use pyds if you use the method in deepstream_imagedata-multistream.py.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.