How to get the input of nvinfer in a DeepStream Python app?

• Hardware Platform (Jetson / GPU) RTX 3060
• DeepStream Version deepstream:5.1-21.02-devel
• TensorRT Version 7.2.2-1+cuda11.1
• NVIDIA GPU Driver Version (valid for GPU only) 510.60.02
• Issue Type( questions, new requirements, bugs) Question

I am trying to extract the input of the sgie classifier, which is an nvinfer element. My pipeline is the same as deepstream-test2. I found a solution in this post, but it is for a C++ application and I don't know exactly how to implement it in a Python application to extract the input. As I am a novice in DeepStream development, I am looking forward to a solution for the Python application.

You could use a probe similar to this blog.
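
For reference, attaching such a probe in Python looks roughly like this. This is a minimal sketch assuming a deepstream-test2-style pipeline; the element name sgie and the fields printed are illustrative:

import pyds
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def sgie_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Object-level metadata (label, confidence, bbox) is visible here,
            # but not the preprocessed tensor that nvinfer will actually consume.
            print(obj_meta.obj_label, obj_meta.confidence)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

sgie.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, sgie_sink_pad_buffer_probe, 0)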

Thank you for the help, but that blog post shows how to get the object's metadata. I want to grab the input of nvinfer, i.e. the actual image going into the sgie.

Can you share why you need to dump the input of nvinfer?

My sgie is not giving accurate enough results, so I wanted to check what its input is and how the images are preprocessed right before going into the sgie nvinfer.

Hope this helps for inference accuracy:
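
(The specific parameters from that reply are not preserved above. For illustration only, the preprocessing-related keys in a gst-nvinfer config file that typically affect sgie accuracy are shown below; the values are hypothetical and must match whatever preprocessing was used at training time.)

[property]
# Hypothetical values -- set these to match the training-time preprocessing.
net-scale-factor=0.00392156862745098    # e.g. 1/255
offsets=123.675;116.28;103.53           # per-channel mean subtraction (0-255 scale)
model-color-format=0                    # 0=RGB, 1=BGR
maintain-aspect-ratio=1
scaling-filter=1                        # interpolation used when resizing (see the nvinfer docs)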

Thank you, setting these parameters improves performance a lot… But I still want to know whether there is a way to extract the input of nvinfer, or if it is not possible in Python right now?

I get a segmentation fault when I try to convert RGBA to BGR using OpenCV.

It worked when I changed the memory type of nvvidconv to unified CUDA memory.
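
For anyone hitting the same segmentation fault, a sketch of the pattern that worked on dGPU, based on the deepstream-imagedata sample (element names are illustrative): the nvvideoconvert feeding the probe must output RGBA in unified CUDA memory, and the mapped surface should be copied before OpenCV touches it.

import cv2
import numpy as np
import pyds
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

# nvvideoconvert must output RGBA in unified CUDA memory so the CPU can map it.
# 3 == NVBUF_MEM_CUDA_UNIFIED on dGPU; a capsfilter after it should request
# video/x-raw(memory:NVMM), format=RGBA. Attach the probe downstream of that.
nvvidconv.set_property("nvbuf-memory-type", 3)

def frame_dump_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the surface as an RGBA numpy array, copy it, then convert to BGR.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        rgba = np.array(n_frame, copy=True, order='C')
        bgr = cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGR)
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, bgr)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK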

How are you going with this? I’m trying to test the same.

My model was trained using PyTorch normalization and resizing, so I'd like to verify that the pre-processing nvinfer does is equivalent.

The nvinfer module includes normalization and, I think, also resizing by default. This happens inside the plugin, after the sink pad, so I'm quite sure a sink pad probe on nvinfer won't get me the exact network input.
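
One way to sanity-check this on paper: per the Gst-nvinfer docs, the plugin normalizes as y = net-scale-factor * (x - offsets) per channel on 0-255 pixels, while a typical torchvision pipeline does y = (x / 255 - mean) / std. A rough mapping (my own sketch, exact only when all channels share one std, since net-scale-factor is a single scalar) looks like this:

# Hypothetical torchvision-style normalization constants (0..1 scale).
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# nvinfer works on 0..255 pixels: y = net_scale_factor * (x - offsets).
# Matching (x/255 - mean)/std exactly needs one shared std; otherwise
# the conversion below is only an approximation.
shared_std = sum(std) / len(std)
net_scale_factor = 1.0 / (255.0 * shared_std)
offsets = [m * 255.0 for m in mean]

print("net-scale-factor=%.8f" % net_scale_factor)
print("offsets=" + ";".join("%.2f" % o for o in offsets))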

Now I'm trying to use the nvdspreprocess module. Since it is a separate module in the pipeline, we should be able to add a source pad probe to inspect its output.

According to the docs, nvdspreprocess creates a “User Metadata at batch level (NvDsPreProcessBatchMeta)”

I've been able to write half of the Python probe, and can confirm I see User Metadata at the batch level.

However, pyds doesn't seem to have bindings for NvDsPreProcessBatchMeta yet…

import pyds
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def probe_nvdspreprocess_pad_src_data(pad, info, u_data):
    print('entered probe')

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    # Walk the batch-level user metadata attached by nvdspreprocess.
    l_user = batch_meta.batch_user_meta_list
    while l_user is not None:
        try:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        except StopIteration:
            break
        if user_meta:
            print(user_meta)
            """ I should be able to cast to a NvDsPreProcessBatchMeta here, for example...
            preProcessBatch = pyds.NvDsPreProcessBatchMeta.cast(user_meta.user_meta_data)
            """
        # Advance to the next node, otherwise the loop spins forever on the first entry.
        try:
            l_user = l_user.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
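
For completeness, the probe would be attached to the nvdspreprocess source pad roughly like this (the element name is hypothetical):

preprocess = pipeline.get_by_name("preprocess-plugin")  # hypothetical element name
preprocess.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, probe_nvdspreprocess_pad_src_data, 0)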

I glanced through the code and can see that the struct exists in the NvDsPreProcessBatchMeta C code, but I can't find the equivalent Python binding.

Not sure if this is the way to get the information we are after, but thought I would throw it in :)

Yes, nvdspreprocess will prepare the input tensor. Can you add bindings for NvDsPreProcessBatchMeta yourself?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.