Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version : 6.3
• JetPack Version : 5.1.2
• TensorRT Version : 8.5.2.2
• Issue Type : question
I have a multi-input model that requires two streams of images as input. I'm trying to modify NvDsInferInitializeInputLayers, but it doesn't seem to work. How should I modify the gst-nvinfer plugin and the nvinfer library?
I have rewritten the nvdspreprocess_lib. How do I configure a PGIE in the deepstream-app configuration file so that it corresponds to multiple nvdspreprocess instances? My model has two input tensors, input0 and input1. Do these two tensors require separate nvdspreprocess config files? The deepstream-app documentation appears to cover only a simple nvdspreprocess use case.
Currently, deepstream-app does not support multiple pre-process configurations. Could you write a simple demo, like our deepstream-test1, instead of using deepstream-app?
I referred to deepstream-test3 and created two instances of nvdspreprocess; each nvdspreprocess prepares a tensor for one input. How do I use the gst_element_link_many function to link them in parallel to nvinfer?
They are not linked in parallel to nvinfer; they are linked serially as well. But you can configure the src-ids property for each nvdspreprocess instance so that each one processes data from a different source.
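For reference, a minimal sketch of two nvdspreprocess config files is below. The file names, tensor names (input0/input1), shape, unique IDs, and custom-lib path are placeholders for your setup; check the Gst-nvdspreprocess documentation for the exact keys supported by your DeepStream version.

```ini
# preprocess_input0.txt (hypothetical file name; values are placeholders)
[property]
enable=1
# ID of the nvinfer instance (gie-unique-id) that should consume this tensor
target-unique-ids=1
network-input-shape=1;3;224;224
# 0 = FP32 (placeholder; match your model)
tensor-data-type=0
tensor-name=input0
custom-lib-path=/path/to/libcustom_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
# process only stream 0 in this instance
src-ids=0
process-on-roi=0
```

```ini
# preprocess_input1.txt (hypothetical file name; values are placeholders)
[property]
enable=1
target-unique-ids=1
network-input-shape=1;3;224;224
tensor-data-type=0
tensor-name=input1
custom-lib-path=/path/to/libcustom_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
# process only stream 1 in this instance
src-ids=1
process-on-roi=0
```

Both files point target-unique-ids at the same PGIE, so both prepared tensors end up attached to buffers that the same nvinfer instance consumes.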
I am now able to obtain the two tensors prepared by nvdspreprocess, but they do not seem to be correctly input into nvinfer. If the two nvdspreprocess instances are serial, how will the tensors they prepare enter nvinfer? Will the tensors from these two instances automatically be combined into one, or will they enter in some other way?
Let's say tensor0 is generated by preprocess0 and tensor1 by preprocess1; both are attached to the same GstBuffer. nvinfer will then parse them automatically. You can refer to gst_nvinfer_process_tensor_input in gstnvinfer.cpp in our source code.
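To illustrate the wiring, here is a sketch of the relevant part of a test1-style app (not a complete program): the two preprocess elements sit in series upstream of nvinfer, and nvinfer is told to consume tensors from metadata. The element variable names are placeholders, and the exact property name should be verified with gst-inspect-1.0 on your DeepStream version.

```
/* Sketch only, assuming streammux, preprocess0, preprocess1 and sink
 * were created earlier with gst_element_factory_make(). */
GstElement *pgie = gst_element_factory_make ("nvinfer", "primary-infer");

/* Consume tensors attached by nvdspreprocess instead of
 * preprocessing inside the plugin. */
g_object_set (G_OBJECT (pgie),
    "input-tensor-meta", TRUE,
    "config-file-path", "pgie_config.txt",   /* placeholder path */
    NULL);

/* Serial chain: each preprocess instance attaches its own tensor
 * to the same buffers as they flow downstream. */
gst_element_link_many (streammux, preprocess0, preprocess1, pgie, sink, NULL);
```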
Thank you very much for your answer. Does this mean that I only need to add a queue element after preprocess0 and preprocess1? If I want to use the tensors prepared by preprocess in gstnvinfer.cpp, do I need to set the property 'input-tensor-from-meta=1'? However, when I set this property, I encountered the error 'Cuda Stream Synchronization failed'.
You can check the specific error code in syncStream. There may be a problem with your memory handling. You can add some logging and tracing to the source code.
I modified the ONNX model to combine the two inputs into a single input with a batch size of 2, then enabled stream synchronization, and now I can use deepstream-app directly and get correct inference results.