DeepStream PeopleNet Transformer error in src_bin_muxer

Please provide complete information as applicable to your setup.

• GPU: A100
• DeepStream Version: 7.0
• None
• Issue Type: Bugs
• File: src_bin_muxer/gstnvstreammux.cpp

Error:
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
** ERROR: main:706: Failed to set pipeline to PAUSED
Quitting
ERROR from src_bin_muxer: Batch size not set
Debug info: gstnvstreammux.cpp(1523): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed

Which sample are you testing? From the error log, nvstreammux's batch-size is not set.

I was trying to run the PeopleNet Transformer sample.
How do I set nvstreammux's batch-size?

For example: g_object_set (G_OBJECT (streammux), "batch-size", 1, NULL);

I have a config file and a label file.
Where can I edit the batch size?
Also, max batch size is already defined in the config file here.
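For reference, when running through deepstream-app the muxer's batch size is set in the [streammux] group of the application config file (group and key names follow the standard deepstream-app config format); a minimal sketch, where the values are illustrative:

```
[streammux]
## batch-size should usually match the number of input sources
batch-size=1
width=1920
height=1080
batched-push-timeout=40000
```

This is separate from the Triton model configuration (config.pbtxt), which only controls the model's max batch size on the inference side.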

Did you modify the code? Without the modification, I can't reproduce this issue. Here is my log: log.txt (2.8 KB)
peoplenet_transformer/config.pbtxt is the model configuration for Triton mode; it is not related to the "Batch size not set" error.

No
I started a fresh container for DeepStream 7.0, as stated in the quick start guide on the website:
docker run --gpus "device=0" -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -e CUDA_CACHE_DISABLE=0 nvcr.io/nvidia/deepstream:7.0-triton-multiarch
went to this path
/opt/nvidia/deepstream/deepstream-7.0/samples/triton_tao_model_repo/peoplenet_transformer
and ran the command:
deepstream-app -c config.pbtxt

Initially it gave an error, "output width not set",
so I set the environment variable: export USE_NEW_NVSTREAMMUX=yes
as listed here: Gst-nvstreammux New — DeepStream 6.4 documentation
Then, after running with the same config.pbtxt again, I got the batch size error.
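For completeness, the environment-variable step above as a copy-pasteable snippet (it must be exported in the same shell that later runs deepstream-app):

```shell
# Opt in to the new nvstreammux implementation (variable name as used
# in this thread and the Gst-nvstreammux New documentation).
export USE_NEW_NVSTREAMMUX=yes
echo "$USE_NEW_NVSTREAMMUX"   # → yes
```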


Can you tell me how to run peoplenet_transformer?

This command line is wrong because config.pbtxt is not deepstream-app's configuration file. If you want to test, you can use deepstream-app -c source1_primary_detector_peoplenet_transformer.txt. This file is in /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton. Please refer to /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton/README.

Actually, I am getting the same batch size error in all apps; it is not specific to peoplenet_transformer.
For example:

I ran it on 2 different GPUs, RTX4500 and A100.
The same error shows in a new container:

  1. output width not set → solved by setting the environment variable export USE_NEW_NVSTREAMMUX=yes
  2. Batch size not set
    The same batch size error on both GPUs,
    in any app, not just peoplenet.

config_infer_primary.txt is the configuration file of nvinfer. If you want to test deepstream-app, please refer to my comment from May 13.

Thank you for your response.
When following the peoplenet_transformer app instructions from the README, I get the following error when running the script prepare_ds_triton_tao_model_repo.sh in the samples directory:

Thanks for sharing! Currently you can download the newer ONNX model; here is the converting command line:

Thanks for the commands.
I am getting this error

My engine file and onnx file are here


Is it a path issue?
Or am I doing something wrong?

Thanks

The path and batch-size of the engine need to be updated. Please copy this patch, pt_onnx.sh (5.1 KB), to /opt/nvidia/deepstream/deepstream/samples, then run the patch to generate an engine. After it finishes, there will be an engine, model.plan, in /opt/nvidia/deepstream/deepstream/samples/triton_tao_model_repo/peoplenet_transformer/1. Then try deepstream-app again.

I generated the model.plan at the path mentioned above.


Where do I update the path and batch size of the engine?

  1. If you have run pt_onnx.sh, could you share the result of "ll /opt/nvidia/deepstream/deepstream/samples/triton_tao_model_repo/peoplenet_transformer/1/"? pt_onnx.sh will create the engine model.plan.
  2. If it still doesn't work, could you share more logs from the following step?
export GST_DEBUG=6 && deepstream-app -c source1_primary_detector_peoplenet_transformer.txt >1.log 2>&1
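Note on the redirection: stdout and stderr should be combined with `2>&1`; writing `2>1.log` alongside `>1.log` opens the file twice, and the two streams clobber each other. A self-contained demonstration, using a stand-in command instead of deepstream-app:

```shell
# Both streams land in one file, in order, because 2>&1 duplicates stderr
# onto stdout's file descriptor rather than opening the file a second time.
{ echo "stdout line"; echo "stderr line" >&2; } >combined.log 2>&1
cat combined.log
# → stdout line
# → stderr line
```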



1.log (4.7 MB)
It seems model.plan was correctly generated, but the app is still not running.

It is because creating the render plugin failed.

  1. You can use a filesink to make a recording. First, please run /opt/nvidia/deepstream/deepstream/user_additional_install.sh to install the software encoder, because A100 does not support hardware encoding. Then set type=1 in [sink0], and set enable=1, enc-type=1 in [sink1].
  2. Please refer to this topic. A100 is not a display card; you can set up a virtual display.
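The sink changes described in step 1 would look roughly like this in the deepstream-app config (a sketch assuming the stock sample config, where [sink1] is the pre-defined file sink; only the keys mentioned above are shown, the rest stays as shipped):

```
[sink0]
enable=1
# type=1 is FakeSink, which avoids the failing render plugin on a headless A100
type=1

[sink1]
enable=1
# enc-type=1 selects the software encoder installed by user_additional_install.sh
enc-type=1
```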

Hi,
I am still getting this error:

1.log (10.1 MB)