I would like to know how to solve the batch size problem mentioned here

• Jetson Orin nano, Ubuntu 22.04
• Deepstream Version 6.4
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2
/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-nvdsanalytics

I am running deepstream-nvdsanalytics.py and trying to use my own custom-trained model with it.

I have also added RTSP output to this pipeline.

I am getting the error below. I don't understand what kind of batch size problem it is referring to. Can anyone explain?

Error: gst-library-error-quark: Batch size not set (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.cpp(3094): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
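For context, gstnvstreammux raises this error when the muxer's `batch-size` property is still unset at the time the pipeline changes state. In the DeepStream Python samples it is set in code, one batch slot per input source, before the pipeline goes to PLAYING. A minimal sketch of that rule (the helper name `streammux_batch_size` is hypothetical, just to illustrate the invariant; in the real app the value is applied with `streammux.set_property('batch-size', n)`):

```python
# Sketch of the invariant behind "Batch size not set" (hypothetical helper).
# In deepstream-nvdsanalytics.py the sample does, before setting PLAYING:
#     streammux.set_property('batch-size', number_sources)
# If that call is removed or skipped while editing the sample,
# gst_nvstreammux_change_state fails with exactly this message.

def streammux_batch_size(num_sources: int) -> int:
    """Return the batch size nvstreammux should be configured with.

    nvstreammux requires batch-size >= 1; the usual choice is one
    batch slot per input source.
    """
    if num_sources < 1:
        raise ValueError("pipeline needs at least one source")
    return num_sources
```

It is also worth checking that the `batch-size` in the nvinfer model config for your custom model matches this value, since a mismatch is another common source of batch-size complaints.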

Hello,

Your topic will be best served in the Jetson category.

I will move this post over for visibility.

Cheers,
Tom

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Have you changed the code? Can you run deepstream-test1.py normally?
