DeepStream PeopleNet Transformer error in src_bin_muxer

Please provide complete information as applicable to your setup.

• DeepStream Version - 7.0
• Hardware - A100
• Issue Type - Bugs
• src_bin_muxer/gstnvstreammux.cpp

max_fps_dur 8.33333e+06 min_fps_dur 2e+08
** ERROR: main:706: Failed to set pipeline to PAUSED
ERROR from src_bin_muxer: Batch size not set
Debug info: gstnvstreammux.cpp(1523): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed

Which sample are you testing? From the error log, nvstreammux's batch-size is not set.

I was trying to run the PeopleNet Transformer.
How do I set nvstreammux's batch-size?

For example: g_object_set (G_OBJECT (streammux), "batch-size", 1, NULL);
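For users of the deepstream-app config files rather than C code, the equivalent setting is the batch-size key in the [streammux] group of the app configuration file (not the model's config.pbtxt). A minimal sketch, where the values shown are illustrative assumptions:

```ini
[streammux]
# batch-size should normally match the number of input sources
batch-size=1
# output resolution of the batched buffer
width=1920
height=1080
batched-push-timeout=40000
```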

I have a config file and a label file.
Where can I edit the batch size?
Also, the max batch size is already defined in the config file here.

Did you modify the code? Without the modification, I can't reproduce this issue. Here is my log: log.txt (2.8 KB)
peoplenet_transformer/config.pbtxt is the model configuration for Triton mode; it is not related to the "Batch size not set" error.
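For context, a Triton model configuration (config.pbtxt) typically looks like the sketch below. The field names are standard Triton, but the values here are illustrative assumptions, and max_batch_size in this file controls Triton's batching, not the nvstreammux batch-size:

```
name: "peoplenet_transformer"
platform: "tensorrt_plan"
max_batch_size: 1
default_model_filename: "model.plan"
```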

I started a fresh container for DeepStream 7.0, as stated in the quickstart guide on the website:
docker run --gpus "device=0" -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -e CUDA_CACHE_DISABLE=0
I went to this path and ran the command as:
deepstream-app -c config.pbtxt

Initially it was giving an error on "output width not set",
so I set this environment variable: export USE_NEW_NVSTREAMMUX=yes
as listed here: Gst-nvstreammux New — DeepStream 6.4 documentation
Then, after running the same config.pbtxt again, I got the batch size error.

Can you tell me how to run peoplenet_transformer?

This command line is wrong because config.pbtxt is not deepstream-app's configuration file. If you want to test, you can use deepstream-app -c source1_primary_detector_peoplenet_transformer.txt. This file is in /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton. Please refer to /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton/README.
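Assuming the default DeepStream 7.0 install layout, the full sequence (including the USE_NEW_NVSTREAMMUX variable mentioned earlier in the thread) would look roughly like this; the deepstream-app call itself needs the DeepStream runtime, so it is shown commented out:

```shell
# Illustrative paths for a default DeepStream 7.0 container; adjust to your setup.
export USE_NEW_NVSTREAMMUX=yes
CFG_DIR=/opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton
# cd "$CFG_DIR" && deepstream-app -c source1_primary_detector_peoplenet_transformer.txt
echo "$USE_NEW_NVSTREAMMUX $CFG_DIR"
```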

Actually, I am getting the same batch size error in all apps; it is not specific to peoplenet_transformer.
For example:

I ran it on 2 different GPUs, RTX 4500 and A100.
The same error is showing in a new container.

  1. "Output width not set" → solved by setting the environment variable export USE_NEW_NVSTREAMMUX=yes
  2. "Batch size not set"
    The same batch size error occurs on both GPUs,
    in any app, not just PeopleNet.

config_infer_primary.txt is the configuration file of nvinfer. If you want to test deepstream-app, please refer to my comment on May 13.

Thank you for your response.
When following the peoplenet_transformer app instructions from the README, I am getting the following error on running the script in the samples directory.

Thanks for sharing! You can download the newer ONNX model; here is the conversion command line.

Thanks for the commands.
I am getting this error.

My engine file and ONNX file are here.

Is it a path issue, or am I doing something wrong?


The path and batch-size of the engine need to be updated. Please copy this patch (5.1 KB) to /opt/nvidia/deepstream/deepstream/samples, then run the patch to generate an engine. After it finishes, there will be an engine model.plan in /opt/nvidia/deepstream/deepstream/samples/triton_tao_model_repo/peoplenet_transformer/1. Then try deepstream-app again.

I generated the model.plan at the path mentioned above.

Where do I update the path and batch size of the engine?