How to run inference with a multi-input model that requires two streams of images

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2.2
• Issue Type: questions

I have a multi-input model that requires two streams of images as input. I’m trying to modify NvDsInferInitializeInputLayers, but it doesn’t seem to work. How should I modify the gst-nvinfer plugin and the nvinfer library?

You can use our nvdspreprocess plugin to prepare your own custom input tensor. You can refer to our deepstream-pose-classification demo.
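As a rough illustration only (not the exact code from that demo), the custom library that nvdspreprocess loads exports a tensor-preparation entry point along the following lines. The header name, struct fields, and enum values below follow the sample nvdspreprocess_lib as I recall them and may differ between DeepStream versions, so treat them as assumptions and check your own headers:

```cpp
// Sketch only: signature, fields and enum values are assumptions based on the
// sample nvdspreprocess_lib / nvdspreprocess_interface.h; verify against your
// DeepStream version.
#include "nvdspreprocess_interface.h"
#include <cuda_runtime.h>

extern "C" NvDsPreProcessStatus
CustomTensorPreparation (CustomCtx *ctx, NvDsPreProcessBatch *batch,
    NvDsPreProcessCustomBuf *&buf, CustomTensorParams &tensorParam,
    NvDsPreProcessAcquirer *acquirer)
{
  // Acquire a device buffer from the plugin's tensor pool.
  buf = acquirer->acquire ();

  // Assumed layout: one fixed-size slot per frame in the batched tensor.
  size_t bytes_per_frame =
      tensorParam.params.buffer_size / tensorParam.params.network_input_shape[0];

  // Copy every scaled/converted frame of this batch into the batched tensor.
  // The shipped sample uses a pitched 2D copy; a plain copy is shown here.
  for (size_t i = 0; i < batch->units.size (); i++) {
    cudaError_t err = cudaMemcpy (
        (char *) buf->memory_ptr + i * bytes_per_frame,
        batch->units[i].converted_frame_ptr,
        bytes_per_frame, cudaMemcpyDeviceToDevice);
    if (err != cudaSuccess) {
      acquirer->release (buf);
      return NVDSPREPROCESS_CUSTOM_TENSOR_FAILED;
    }
  }

  // Report the actual batch size of this buffer (may be smaller than configured).
  tensorParam.params.network_input_shape[0] = (int) batch->units.size ();
  return NVDSPREPROCESS_SUCCESS;
}
```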

In this demo, there are multiple fixed inputs that are non-image based. Are there any examples that involve multiple image inputs?

We don’t have a demo like that yet, but you can consider chaining multiple nvdspreprocess instances to meet your needs:

... -> nvdspreprocess (image for the 1st input layer) -> nvdspreprocess (image for the 2nd input layer) -> nvinfer -> ...

I have rewritten nvdspreprocess_lib. How do I configure the PGIE in the deepstream-app configuration file so that it works with multiple nvdspreprocess instances? My model has two input tensors, input0 and input1. Do these two tensors require separate nvdspreprocess config files? The deepstream-app documentation appears to cover only a simple nvdspreprocess use case.

Currently, deepstream-app does not support multiple pre-process configurations. Could you just write a simple demo, like our deepstream-test1, instead of using deepstream-app?
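Something along these lines could set up that part of a standalone app. This is only a sketch: the factory names ("nvdspreprocess", "nvinfer") are the real DeepStream elements, but the config file paths and variable names are placeholders, and the boolean nvinfer property that makes it read tensors from meta is assumed here to be named input-tensor-meta:

```cpp
/* Sketch only: config file names are placeholders; "input-tensor-meta" is an
 * assumption -- check `gst-inspect-1.0 nvinfer` on your setup. */
#include <gst/gst.h>

static gboolean
add_inference_chain (GstBin *pipeline, GstElement **pre0_out,
    GstElement **pre1_out, GstElement **pgie_out)
{
  GstElement *pre0 = gst_element_factory_make ("nvdspreprocess", "preprocess0");
  GstElement *pre1 = gst_element_factory_make ("nvdspreprocess", "preprocess1");
  GstElement *pgie = gst_element_factory_make ("nvinfer", "primary-infer");
  if (!pre0 || !pre1 || !pgie)
    return FALSE;

  /* One config file per instance: the per-tensor details (tensor name for
   * input0/input1, network input shape, which sources the group applies to)
   * live inside these files. */
  g_object_set (G_OBJECT (pre0), "config-file", "config_preprocess_input0.txt", NULL);
  g_object_set (G_OBJECT (pre1), "config-file", "config_preprocess_input1.txt", NULL);

  /* Make nvinfer consume the tensors attached as metadata by nvdspreprocess
   * instead of preprocessing the frames itself. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "config_infer_two_inputs.txt",
      "input-tensor-meta", TRUE, NULL);

  gst_bin_add_many (pipeline, pre0, pre1, pgie, NULL);
  *pre0_out = pre0;
  *pre1_out = pre1;
  *pgie_out = pgie;
  return TRUE;
}
```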

I referred to deepstream-test3 and created two instances of nvdspreprocess; each one prepares the tensor for one input. How do I use the gst_element_link_many function to link them in parallel to nvinfer?

They are not linked to nvinfer in parallel; they are in series as well. But you can configure the src-ids property for each preprocess instance so that it processes data from a different source.
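Concretely, with the placeholder names from the earlier sketch, the serial linking could look like the snippet below. src-ids is set inside each preprocess instance's own config file, so each instance only prepares a tensor for its own source(s) while the batched buffer flows straight through the chain:

```cpp
#include <gst/gst.h>

/* Serial topology: ... streammux -> preprocess0 -> preprocess1 -> nvinfer -> sink.
 * All element pointers are the placeholders from the previous sketch and must
 * already have been added to the same bin. */
static gboolean
link_inference_chain (GstElement *streammux, GstElement *pre0,
    GstElement *pre1, GstElement *pgie, GstElement *sink)
{
  /* Both preprocess instances see every batched buffer; each one only prepares
   * a tensor for the sources listed under src-ids in its own config file. */
  return gst_element_link_many (streammux, pre0, pre1, pgie, sink, NULL);
}
```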

I am now able to obtain the two tensors prepared by nvdspreprocess, but they do not seem to be correctly fed into nvinfer. If the two nvdspreprocess instances are in series, how do the tensors they prepare enter nvinfer? Will the tensors from these two instances automatically be combined into one, or do they enter in some other way?

Let’s say tensor0 is generated by preprocess0 and tensor1 by preprocess1; both are attached to the same GstBuffer. nvinfer will then parse them automatically. You can refer to the gst_nvinfer_process_tensor_input function in gstnvinfer.cpp in our source code.
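To confirm that both tensors really are attached to the batch before they reach nvinfer, you could add a sink-pad probe like the sketch below on nvinfer's sink pad. The meta type and struct/field names follow nvdspreprocess_meta.h as I remember them, so verify them against your headers:

```cpp
// Sketch of a buffer probe that counts the preprocess tensors attached to the
// batch. Struct/field names (tensor_meta, tensor_name, ...) are assumptions
// taken from nvdspreprocess_meta.h from memory.
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvdspreprocess_meta.h"

static GstPadProbeReturn
tensor_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  guint tensor_count = 0;
  for (NvDsUserMetaList *l = batch_meta->batch_user_meta_list; l; l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type != NVDS_PREPROCESS_BATCH_META)
      continue;
    GstNvDsPreProcessBatchMeta *pre_meta =
        (GstNvDsPreProcessBatchMeta *) user_meta->user_meta_data;
    // tensor_name identifies which model input this tensor targets.
    g_print ("tensor %u: %s\n", tensor_count,
        pre_meta->tensor_meta->tensor_name.c_str ());
    tensor_count++;
  }
  // With preprocess0 and preprocess1 both enabled, two entries are expected.
  g_print ("preprocess tensors attached to this batch: %u\n", tensor_count);
  return GST_PAD_PROBE_OK;
}
```

Attach it with gst_pad_add_probe() on the nvinfer sink pad using GST_PAD_PROBE_TYPE_BUFFER; with both preprocess instances enabled you should see two entries, one per tensor name.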

Thank you very much for your answer. Does this mean that I only need to add a queue element after preprocess0 and preprocess1? If I want to use the tensors prepared by preprocess in gstnvinfer.cpp, do I need to set the property ‘input-tensor-from-meta=1’? However, when I set this property, I encountered the error ‘Cuda Stream Synchronization failed’.

You can check the specific error code in syncStream. There may be a problem with your memory handling. You can add some logging and tracing in the source code.
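For the logging part, a small wrapper around the CUDA runtime call is usually enough to surface the real error behind a "Cuda Stream Synchronization failed" message; this is plain CUDA runtime API, and the function name here is just a placeholder:

```cpp
// Generic CUDA error tracing you can drop around the stream-sync call (or any
// other CUDA call) in your custom lib to see the underlying error code.
#include <cuda_runtime.h>
#include <cstdio>

static cudaError_t
checked_stream_sync (cudaStream_t stream, const char *where)
{
  cudaError_t err = cudaStreamSynchronize (stream);
  if (err != cudaSuccess) {
    // cudaGetErrorName/String turn the numeric code into readable text; an
    // "illegal memory access" here usually points at a bad copy earlier on.
    fprintf (stderr, "[%s] cudaStreamSynchronize failed: %s (%s)\n",
        where, cudaGetErrorName (err), cudaGetErrorString (err));
  }
  return err;
}
```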

I have resolved the CUDA stream bug. Here is my current pipeline (picture attached). Is this correct?

Yes. The pipeline is correct.

I modified the ONNX model to combine the two inputs into a single input with a batch size of 2, then enabled stream synchronization, and now I can use deepstream-app directly and get correct inference results.
