Error:
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
** ERROR: main:706: Failed to set pipeline to PAUSED
Quitting
ERROR from src_bin_muxer: Batch size not set
Debug info: gstnvstreammux.cpp(1523): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed
Did you modify the code? Without any modification, I can’t reproduce this issue. Here is my log: log.txt (2.8 KB)
peoplenet_transformer/config.pbtxt is the model configuration for Triton mode. It is not related to the “Batch size not set” error.
No
I started a fresh container for DeepStream 7.0,
as stated in the quickstart guide on the website: docker run --gpus "device=0" -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -e CUDA_CACHE_DISABLE=0 nvcr.io/nvidia/deepstream:7.0-triton-multiarch
went to this path: /opt/nvidia/deepstream/deepstream-7.0/samples/triton_tao_model_repo/peoplenet_transformer
and ran the command: deepstream-app -c config.pbtxt
Initially it gave an error about “output width not set”,
so I set this environment variable: export USE_NEW_NVSTREAMMUX=yes
as listed here: Gst-nvstreammux New — DeepStream 6.4 documentation
Then, after running with the same config.pbtxt again, I got the batch size error.
This command line is wrong because config.pbtxt is not deepstream-app’s configuration file. If you want to test, you can use deepstream-app -c source1_primary_detector_peoplenet_transformer.txt. This file is in /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton. Please refer to /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton/README.
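For reference, the full sequence inside the container would look something like this (a sketch, assuming the default sample layout of the DeepStream 7.0 Triton container):

cd /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app-triton
deepstream-app -c source1_primary_detector_peoplenet_transformer.txt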
Thank you for your response.
When following the peoplenet_transformer app instructions from the README, I am getting the following error when running the script prepare_ds_triton_tao_model_repo.sh in the samples directory
The path and batch-size of the engine need to be updated. Please copy this patch pt_onnx.sh (5.1 KB) to /opt/nvidia/deepstream/deepstream/samples, then run the patch to generate an engine. After it finishes, there will be an engine model.plan in /opt/nvidia/deepstream/deepstream/samples/triton_tao_model_repo/peoplenet_transformer/1. Then try deepstream-app again.
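A minimal sketch of those steps, assuming the attached pt_onnx.sh takes no arguments (the script name and paths are taken from above):

cp pt_onnx.sh /opt/nvidia/deepstream/deepstream/samples/
cd /opt/nvidia/deepstream/deepstream/samples
bash pt_onnx.sh
ls triton_tao_model_repo/peoplenet_transformer/1/    # model.plan should appear here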
If you have run pt_onnx.sh, could you please share the result of “ll /opt/nvidia/deepstream/deepstream/samples/triton_tao_model_repo/peoplenet_transformer/1/”? pt_onnx.sh will create the engine model.plan.
If it still doesn’t work, could you share more logs by following the steps below?
You can use filesink to make a recording. First, please run /opt/nvidia/deepstream/deepstream/user_additional_install.sh to install the software encoder, because the A100 does not support hardware encoding. Then set type=1 in [sink0], and set enable=1 and enc-type=1 in [sink1].
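As a sketch, the relevant groups of the deepstream-app configuration file would end up looking roughly like this (only the values mentioned above are the point; type=3 and output-file are assumed to already be present in the sample [sink1] group):

[sink0]
enable=1
type=1          # fakesink; the A100 has no display output

[sink1]
enable=1        # enable the file sink
type=3          # file output (assumed unchanged from the sample config)
enc-type=1      # software encoder installed by user_additional_install.sh
output-file=out.mp4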
Please refer to this topic. The A100 is not a display card; you can set up a virtual display.
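One possible way to set up a virtual display, assuming Xvfb is acceptable (the linked topic may describe a different method):

apt-get update && apt-get install -y xvfb
Xvfb :1 -screen 0 1920x1080x24 &
export DISPLAY=:1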