Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
The issue below is observed when the batch size is set to greater than 1.
nvinferserver gstnvinferserver_impl.cpp:816:processInputTensor:<secondary_inference1> error: Mismatch in input tensor batch sizes 32 vs 18, if input-tensors are non-batched, update config with
input_tensor_from_meta { is_first_dim_batch: false }
In my pipeline I am using the nvdspreprocess plugin, which processes objects. My secondary inference runs on the tensor meta created by the nvdspreprocess plugin. The input_tensor_from_meta property is set to true.
My pipeline works fine when a batch contains a single frame. The error above is observed when a batch contains 2 or more frames.
Could you share the configuration files of nvdspreprocess and nvinferserver? What is the whole media pipeline? Which sample are you testing or referring to?
The nvinferserver low-level library is open source. Can you add a log to print inputs->getSize() and m_Backend->getInputLayerSize() in InferBaseContext::doInference of /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinferserver/infer_base_context.cpp? For example:
printf("inputs->getSize:%d, m_Backend->getInputLayerSize:%d\n",
    inputs->getSize(), m_Backend->getInputLayerSize());
assert(inputs->getSize() == m_Backend->getInputLayerSize());
Note that you then need to rebuild the library and replace /opt/nvidia/deepstream/deepstream/lib/libnvds_infer_server.so with the new .so.
We need to check why the inputs size is 2. I tested deepstream-preprocess-test with input_tensor_from_meta, and the inputs size is 1 even when the source number is 2. Please use the following steps to narrow down this issue (a consolidated sketch of the logging follows these steps):
1. Please add a log in InferBaseContext::run, which calls doInference, to check whether the inputs' size is 2.
2. If the inputs' size is 2, please add a log in GstNvInferServerImpl::batchInference, which calls run, to check whether the batchArray size is 2. For example: printf("batchArray->getSize():%d\n", batchArray->getSize());
3. If the batchArray size is 2, please add a log in GstNvInferServerImpl::processInputTensor to continue checking.
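Here is that consolidated sketch. The function and variable names (inputs, batchArray, tensors) come from this discussion and the DeepStream 6.x sources; the exact surrounding signatures are release-dependent, so adjust to whatever your copies of infer_base_context.cpp and gstnvinferserver_impl.cpp actually contain:
// Step 1: in InferBaseContext::run (infer_base_context.cpp), just before the
// call to doInference -- how many input batches reach the low-level context?
printf("InferBaseContext::run: inputs->getSize():%d\n", (int)inputs->getSize());
// Step 2: in GstNvInferServerImpl::batchInference (gstnvinferserver_impl.cpp),
// just before the call to run -- how many batches does the plugin submit?
printf("GstNvInferServerImpl::batchInference: batchArray->getSize():%d\n",
       (int)batchArray->getSize());
// Step 3: in GstNvInferServerImpl::processInputTensor, where the "Mismatch in
// input tensor batch sizes" error above is raised -- how many tensor batches
// were collected from the preprocess meta? (assumes tensors is the batch
// array passed on to batchInference, as discussed below)
printf("GstNvInferServerImpl::processInputTensor: tensors->getSize():%d\n",
       (int)tensors->getSize());
Rebuild libnvds_infer_server.so after each change, as described earlier.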
I am using the preprocess plugin between PGIE and SGIE. The preprocess plugin operates on the person class detected by PGIE, and its max batch size is set to 32. The inputs' size will be 1 if the number of persons detected by PGIE is less than the max batch size (32). If the number of persons detected by PGIE is 42, then the preprocess plugin will create two batches, of size 32 and 10, so the inputs' size will be 2.
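As a side note, the chunking described above is easy to verify with a small standalone sketch. This is illustrative arithmetic only, not DeepStream code: objects appear to be split into tensor batches of at most the configured max batch size, so any object count above that limit produces more than one batch. The same arithmetic would also explain the "32 vs 18" in the original error if 50 objects were in that batch (32 + 18 = 50).
#include <cstdio>
#include <vector>

// Split numObjects into chunks of at most maxBatchSize, mimicking how the
// preprocess plugin appears to batch detected objects into tensor meta.
static std::vector<int> splitIntoBatches(int numObjects, int maxBatchSize) {
    std::vector<int> batches;
    for (int remaining = numObjects; remaining > 0; remaining -= maxBatchSize)
        batches.push_back(remaining < maxBatchSize ? remaining : maxBatchSize);
    return batches;
}

int main() {
    for (int maxBatch : {32, 64}) {
        for (int numObjects : {18, 42, 50}) {
            std::vector<int> batches = splitIntoBatches(numObjects, maxBatch);
            printf("%2d objects, max batch %2d -> %zu batch(es):",
                   numObjects, maxBatch, batches.size());
            for (int b : batches)
                printf(" %d", b);
            printf("\n");
        }
    }
    return 0;
}
With a max batch size of 64, every count above fits into a single batch, which matches the later observation in this thread that raising the max batch size to 64 makes the inputs size 1.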
Could you share the whole media pipeline? Please refer to the sample /opt/nvidia/deepstream/deepstream-6.4/samples/configs/deepstream-app-triton/source4_1080p_dec_preprocess_infer-resnet_tracker_preprocess_sgie_tiled_display_int8.txt. In this sample, the pipeline looks like "… preprocess → nvinferserver (pgie) → preprocess → nvinferserver (sgie1) → nvinferserver (sgie) …". You can run "deepstream-app -c source4_1080p_dec_preprocess_infer-resnet_tracker_preprocess_sgie_tiled_display_int8.txt" to test; the app runs well with all four sources.
Did you use preprocess before PGIE? Please check target-unique-ids and unique-id in all cfgs. If it still doesn't work, please check why tensors, which is passed to batchInference, is 2 in GstNvInferServerImpl::processInputTensor.
I am not using preprocess before PGIE.
My pipeline is as below:
multiurisrcbin → PGIE → tracker → preprocess → SGIE → msgconv → message broker
My pipeline runs with a single source (file input). The preprocess plugin works on objects (person class only).
If you set the preprocess max batch size to 43, can the app run well?
After I set network-input-shape=1;3;224;224 in config_preprocess_sgie.txt, I still can't reproduce the "input size = 2" issue with source4_1080p_dec_preprocess_infer-resnet_tracker_preprocess_sgie_tiled_display_int8.txt. I will continue to check.
The pipeline works fine if we set the max batch size higher than the number of objects detected by PGIE: when we set the preprocess max batch size to 64, the inputs size becomes 1. But we can't keep changing the max batch size, as we can't estimate the number of objects in a frame.
Please ensure that preprocess works on objects instead of frames.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please help us reproduce this issue, since we can't reproduce it on our side.
Are your two models ONNX? Could you share the details of all models, including inputs and outputs?
What is the difference in functionality between the preprocess plugin and preprocess_gpu?
Can you reproduce this issue on DS 6.4? Could you provide a simplified code project for reproducing this issue, including models, code, and cfgs? Then we can debug it directly. You can use the forum private message feature: please click forum avatar → personal messages → new message.