• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.0
• JetPack Version (valid for Jetson only) : 6.0
• TensorRT Version : 8.6.2.3
• Issue Type( questions, new requirements, bugs) : question
I have the following Python code for running model inference on an NVIDIA Jetson Orin:
...
# Create nvstreammux element to form batches from one or more sources
streammux = create_pipeline_element("nvstreammux", "stream-muxer", "Stream Muxer")
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 32) # Set batch size to 32
streammux.set_property("batched-push-timeout", 4000000) # Adjust if needed
streammux.set_property("live-source", 1)
# Create nvvidconv element to convert the input stream
nvvidconv = create_pipeline_element(
"nvvideoconvert", "convertor", "Video Converter"
)
nvvidconv.set_property("nvbuf-memory-type", 0)
# Create caps filter to convert the input stream to RGBA format
caps_rgb = create_pipeline_element("capsfilter", "nvmm_caps_rgb", "CapsFilter")
caps_rgb.set_property(
"caps",
Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"),
)
# Create nvinfer element to run inference on the input stream from nvstreammux
pgie = create_pipeline_element("nvinfer", "primary-inference", "Primary Inference")
pgie.set_property("config-file-path", f"inference/{model_name}/config_infer.txt")
# Create fakesink element
sink = create_pipeline_element("fakesink", "waylandsink", "Wayland Sink")
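For reference, create_pipeline_element is just a thin wrapper around Gst.ElementFactory.make, and the elements are added and linked in the usual order. A simplified sketch of that part (here pipeline and source_bin stand in for my actual pipeline object and camera/file source bin):

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def create_pipeline_element(factory_name, name, printable_name):
    # Create a GStreamer element and report which one failed, if any
    element = Gst.ElementFactory.make(factory_name, name)
    if not element:
        sys.stderr.write(f"Unable to create {printable_name}\n")
    return element

# Add all elements to the pipeline
for element in (streammux, nvvidconv, caps_rgb, pgie, sink):
    pipeline.add(element)

# The single source bin is attached to the muxer via a requested sink pad
sinkpad = streammux.request_pad_simple("sink_0")
srcpad = source_bin.get_static_pad("src")
srcpad.link(sinkpad)

# streammux -> nvvideoconvert -> capsfilter (RGBA) -> nvinfer -> fakesink
streammux.link(nvvidconv)
nvvidconv.link(caps_rgb)
caps_rgb.link(pgie)
pgie.link(sink)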
This is the configuration file for the model:
model-color-format=0 # 0=RGB, 1=BGR
onnx-file=models/new_arcing_model_nchw.onnx
model-engine-file=models/new_arcing_model_nchw.onnx_b32_gpu0_fp32.engine
labelfile-path=models/labels.txt
infer-dims=3;224;224
batch-size=32
network-mode=0 # 0=FP32, 1=INT8, 2=FP16 mode
network-type=1
num-detected-classes=1
process-mode=1
gie-unique-id=1
classifier-threshold=0.0
The problem I have is with the number of frames batched by nvstreammux. I set batch-size to 32 and increased batched-push-timeout, but with a live camera source it processes only 1 frame per batch instead of 32. When switching to a file source it batches only 6-9 frames at a time, no matter how large batched-push-timeout is. What could be the cause of so few frames being batched by nvstreammux? The model itself can take a batch of 32 images.
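For reference, this is roughly how the per-batch frame count can be observed: a buffer probe on the nvinfer source pad that reads num_frames_in_batch from the batch meta (a minimal sketch using the standard pyds bindings):

import pyds
from gi.repository import Gst

def batch_size_probe(pad, info, user_data):
    # Print how many frames nvstreammux actually packed into this batch
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if batch_meta:
        print(f"Frames in this batch: {batch_meta.num_frames_in_batch}")
    return Gst.PadProbeReturn.OK

pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, batch_size_probe, None)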