GPU memory usage with DeepStream Triton

Environment

TensorRT Version: 8.4.1.5
GPU Type: NVIDIA GeForce GTX 1660
Nvidia Driver Version:
CUDA Version: 11.6
CUDNN Version: 8.4.0

I deployed YOLOv8 to DeepStream 6.1 with Triton and ran it with 2 sources (source0 and source1), each with num-sources=3. I have 6 GB of GPU memory, but deepstream-app only uses 2 GB. How can I configure it to use all the memory on my machine? Thanks!

Here is my config:
[source0]
enable=1
enable_cuda_buffer_sharing=1
type=3
uri=file://…/…/streams/sample_1080p_h264.mp4
num-sources=3
gpu-id=0
cudadec-memtype=0

[source1]
enable=1
enable_cuda_buffer_sharing=1
type=3
uri=file://…/…/streams/sample_1080p_h264.mp4
num-sources=3
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
live-source=0
batch-size=6
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
# (0): nvinfer; (1): nvinferserver
plugin-type=1
#infer-raw-output-dir=triton-output
batch-size=1
interval=0
gie-unique-id=1
config-file=config_infer_plan_engine_primary.txt
# Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1

We do not have a limit on GPU memory usage. Does your pipeline only require 2GB of memory?
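For reference, you can watch how much memory the pipeline actually consumes from a second terminal while deepstream-app is running:

nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv -l 1

This prints per-process GPU memory usage once per second, so you can see whether it grows as you add sources.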

With 2 sources configured I get 30 fps, with 6 sources 16 fps, and with 30 sources 5 fps, but memory usage stays at 2 GB. I don't know why memory usage doesn't scale up to give better performance. Also, I exported the ONNX model with a dynamic batch size and converted the ONNX to an engine, but I can't set batch-size > 1 in [primary-gie]. The error log says: “model expected the shape of dimension 0 to be between 1 and 1 but received 6”.

No matter how many sources you add, the memory required by the inference plugin remains unchanged, because a batch-size-1 engine infers one frame at a time. You can consider converting your model to support a dynamic batch size.
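For example, assuming the exported YOLOv8 ONNX has an input tensor named images with 3x640x640 frames (the Ultralytics default; check your own model with a tool such as Netron), an engine that accepts batch sizes 1 through 6 can be built with trtexec:

trtexec --onnx=yolov8.onnx \
        --saveEngine=yolov8_dynamic.engine \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:6x3x640x640 \
        --maxShapes=images:6x3x640x640

The file names here are placeholders. An engine built this way advertises dimension 0 as the range [1,6] rather than the fixed [1,1] that causes the error above.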

Please suggest some ways to increase performance when using DeepStream with Triton server. Can you give me a reference or some config parameters to study for my custom config?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

About the config parameters, you can refer to our guide: gst-nvinferserver.
If you want to increase performance, you can consider converting your model to a dynamic batch size, as I advised before.
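As a sketch of where the batch sizes have to agree once the engine supports batching (the snippets below show only the relevant lines; the surrounding fields are your own):

In the deepstream-app config, raise the gie batch size to match [streammux]:

[primary-gie]
batch-size=6

In config_infer_plan_engine_primary.txt (the gst-nvinferserver protobuf text config):

infer_config {
  max_batch_size: 6
}

And in the Triton model repository, the model's config.pbtxt must allow the same maximum:

max_batch_size: 6

If any of these stays at 1, frames will still be inferred one at a time regardless of how many sources you add.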
