ERROR from element Stream-muxer: Input buffer number of surfaces (1684472064) must be equal to mux->num_surfaces_per_frame (4)

I get the above error while running a local video file.

INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT Input           3x300x300       
1   OUTPUT kFLOAT NMS             1x200x7         
2   OUTPUT kFLOAT NMS_1           1x1x1           

0:00:02.358925785 10933 0x55dc260f7270 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/ubuntu/PoC/model/Primary_Bottle_SSD/ssd_resnet18_retrained_epoch_040_bo_99_bl_94_rej_84.etlt_b1_gpu0_fp32.engine
0:00:02.363315957 10933 0x55dc260f7270 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:PoC_pgie_config.txt sucessfully
Running...
ERROR from element Stream-muxer: Input buffer number of surfaces (1684472064) must be equal to mux->num_surfaces_per_frame (4)
        Set nvstreammux property num-surfaces-per-frame appropriately

Error details: gstnvstreammux.c(364): gst_nvstreammux_chain (): /GstPipeline:Bottle-pipeline/GstNvStreamMux:Stream-muxer
 END Running...===========

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) ==> T4
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

May I know what command you ran? Please share any config file as well.

Command:

./deepstream-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264

Below are the initial settings:



        g_object_set (G_OBJECT (data.streammux), "num-surfaces-per-frame", 4, NULL);
        g_object_set (G_OBJECT (data.streammux), "bufapi-version", TRUE, NULL);
        g_object_set (G_OBJECT (data.streammux), "maxperf", TRUE, NULL);
        g_object_set (G_OBJECT (data.caps), "bufapi-version", TRUE, NULL);
        g_object_set (G_OBJECT (data.caps), "maxperf", TRUE, NULL);
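
I am not sure whether 4 is the right value for num-surfaces-per-frame with this input; the error message suggests setting it "appropriately", and for a plain local video file each decoded buffer should carry a single surface. A minimal variant, assuming one surface per buffer (which is also the property's default), would be:

        /* Assumption: the decoded local file delivers one surface per buffer,
         * so num-surfaces-per-frame is left at its default of 1. */
        g_object_set (G_OBJECT (data.streammux), "num-surfaces-per-frame", 1, NULL);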

CONFIG FILE:



maintain-aspect-ratio=1
uff-input-blob-name=Input
batch-size=40
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
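
For reference, a sketch of how this config is attached to the primary nvinfer element via its config-file-path property; the file name PoC_pgie_config.txt matches the log above, but the data.pgie variable name is an assumption, not taken from the actual app:

        /* Sketch: point the primary nvinfer element at the config above.
         * "data.pgie" is a placeholder; the real variable name may differ. */
        g_object_set (G_OBJECT (data.pgie), "config-file-path", "PoC_pgie_config.txt", NULL);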

As mentioned in the TLT user guide, please try with GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: sample apps that demonstrate how to deploy models trained with TAO on DeepStream.

Thank you.