This may require you to modify the nvinfer element.
In NvDsInferContextImpl::queueInputBatch(NvDsInferContextBatchInput &batchInput), copy the corresponding tensor into m_BindingBuffers, similar to what NvDsInferInitializeInputLayers does.
Thanks for your response. I studied the function, but I have some questions:
Will this function get called for every detected object in my pipeline?
I am sending the embeddings generated by the recognition model as output tensor meta, so how can I access them in this function? I couldn't find any connection between the NvDsInferContextBatchInput data type and NvDsBatchMeta.
face detection image (source 0) --|
                                  |--> nvstreammux (batch-size=2) --> nvdspreprocess_0 (process source 0) --> nvdspreprocess_1 (process source 1) --> nvinfer (batch-size=1)
face swap image (source 1) -------|
I found a solution that does not require modifying nvinfer.
For the pipeline above, form the detected faces and the swap faces into one batch; then nvdspreprocess_0 processes source 0 and nvdspreprocess_1 processes source 1 to produce the model's two inputs, and the batch-size property of nvinfer is set to 1.
1. [source1] does not import any images.
2. The batch-size in [streammux] is 1.
In fact, the figure above shows a workaround: form the two images into one batch, process them into two GstNvDsPreProcessBatchMeta through the two nvdspreprocess elements, and finally nvinfer sends the two GstNvDsPreProcessBatchMeta to the model.
I guess you still haven't understood my pipeline. The face swap model expects two inputs: one is the face image detected by the face detection model, and the other is the 1x512-dimensional embedding generated by my face recognition model.
I am able to pass the detected face image as input using the preprocess plugin, but I can't figure out how to do the same for the embeddings, since the recognition model produces nothing other than the embeddings. So how can I set another preprocess plugin after it?
Use a fake image as the input source. This is only done to add an input source.
Assume the original image is source_0 and the fake source is source_1. The model has two input layers, input_0 and input_1.
Process source_0 into the model's input_0 through nvdspreprocess_0.
You don't have to do anything with source_1's image data; instead, use the nvdspreprocess_1 configuration item custom-input-transformation-function to substitute the data you need as the model's input_1.
When the input-tensor-meta property of nvinfer is set to true, nvinfer reads only the GstNvDsPreProcessBatchMeta attached by nvdspreprocess as its input.
Alternatively, use nvinferserver; that plugin supports models with multiple inputs.
Actually, I tried to use two sources, but I can't figure out what to put in the preprocess config file. Since the input layer has dimension 1x512, I tried
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=2
but I am getting an error that it is not supported. Also, what am I supposed to put in the following?
# 0=process on objects 1=process on frames
process-on-frame=1
# processing width/height at which image is scaled
processing-width=0
processing-height=512
# tensor shape based on network-input-order
network-input-shape=1;512
# 0=RGB, 1=BGR, 2=GRAY
network-color-format=2
There is currently no public sample code. The key configuration items are as follows:
preprocess_0 processes source_0 and outputs the input_0 tensor:
[property]
enable=1
......
# tensor shape based on network-input-order
network-input-shape=4;3;128;128
# tensor name same as input layer name
tensor-name=input_0 # check here
.....
[group-0]
src-ids=0 # check here
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
preprocess_1 processes source_1 and outputs the input_1 tensor:
[property]
enable=1
......
# tensor shape based on network-input-order
network-input-shape=4;3;128;128
# tensor name same as input layer name
tensor-name=input_1 # check here
.....
[group-0]
src-ids=1 # check here
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
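For reference, the element wiring that goes with these two config files might look like the fragment below (illustrative only; the config file names and the elements on either side are assumptions), with input-tensor-meta enabled so nvinfer consumes the tensors attached by the two preprocess elements:

```
... ! nvstreammux name=mux batch-size=2 ! \
    nvdspreprocess config-file=preprocess_0.txt ! \
    nvdspreprocess config-file=preprocess_1.txt ! \
    nvinfer config-file-path=infer.txt input-tensor-meta=1 batch-size=1 ! ...
```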
If the above workaround is too cumbersome, you can consider using nvinferserver, which may require you to migrate some code.
Hello, thanks for your help. I implemented this workaround and it is working for me, but I am facing a major problem: a drop in FPS. Without the additional secondary preprocess I was getting an average FPS of 15+, whereas after adding this preprocess and a second source I am getting an average FPS between 10 and 12. Can you tell me why this is happening?
P.S. I am getting the second source by using "num-source=2" in the source property of the app config file.
Also, what did you mean by that? I couldn't find any property or plugin for it. Is it something similar to fakesink, or did you mean something else?