How to use preprocessed input tensors attached as metadata

• Hardware Platform (Jetson / GPU): Jetson Nano 4GB Dev Kit
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1 rev1
• TensorRT Version: 7.1.3

In the Gst-nvinfer — DeepStream 6.1.1 Release documentation, I read that we can use a preprocessed input tensor attached as metadata as the input for the primary inference (nvinfer).
Is there any example implementation of this? I have looked through the sample apps and didn't find anything similar; am I missing something?
How do I attach an input tensor as metadata, and how will the nvinfer plugin use it for inference?

Hi,

You can find an example in the below folder:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test

Thanks

Hello @AastaLLL, I have looked at this example; it is about attaching the output tensors of the primary infer plugin as metadata so that secondary (or other) infer plugins can consume them.
My question is the reverse of that: how do I attach preprocessed image data as metadata so that it can be used for inference? Isn't this what the input-tensor-meta property is for? How do I use it?

Any update on this?

Hi,

Sorry for the delayed response.

Could you check whether our NvDsPreProcess sample below meets your requirement?

/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-preprocess-test
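In case it helps, here is a minimal sketch of how the pieces connect in that sample: the nvdspreprocess element runs the preprocessing and attaches the resulting tensor as user metadata on the batch, and nvinfer is told to consume that tensor (instead of doing its own preprocessing) via input-tensor-meta=1. This is a hedged pipeline fragment, not a definitive command: the config file names and the source/sink elements below are placeholders; refer to the sample app for the actual config files.

```shell
# Sketch only: nvdspreprocess attaches the preprocessed input tensor as
# metadata; nvinfer consumes it because input-tensor-meta=1 is set.
# File names (sample.h264, config_preprocess.txt, config_infer.txt) are
# placeholders; see deepstream-preprocess-test for the real configs.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvdspreprocess config-file=config_preprocess.txt ! \
  nvinfer input-tensor-meta=1 config-file-path=config_infer.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink \
  filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0
```

With this wiring, nvinfer skips its internal scaling/normalization and runs inference directly on the tensor that nvdspreprocess attached, which is what the input-tensor-meta property is for.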

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.