Deepstream - Jetson Xavier NX - Mode int8

Jetson Xavier NX
I am running inference for the yolov5x model and getting only about 20 FPS. How can I increase the FPS for this model? My DeepStream config files are below.
config_infer_primary_yolov5.txt (780 Bytes)
deepstream_app_config_yolov5.txt (3.7 KB)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Here it is, @kayccc

You can use the TensorRT tool trtexec to profile your ONNX model with different batch sizes, e.g.

$ sudo nvpmodel -m 2
$ sudo jetson_clocks
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov5x6.onnx --explicitBatch --workspace=2048 --int8
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov5x6.onnx --explicitBatch --workspace=2048 --fp16
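When each run finishes, trtexec prints a performance summary you can compare between the INT8 and FP16 builds. A small helper like the one below can pull the throughput figure out of each log (a sketch, assuming trtexec's usual "Throughput: N qps" summary line; the `parse_qps` name and log file paths are made up for illustration):

```shell
# Extract the reported throughput (queries per second) from a trtexec log.
# Assumes the summary line looks like: "[I] Throughput: 237.5 qps"
parse_qps() {
  grep -o 'Throughput: [0-9.]* qps' "$1" | awk '{print $2}'
}

# Usage (hypothetical log names):
#   /usr/src/tensorrt/bin/trtexec --onnx=yolov5x6.onnx --int8 2>&1 | tee trt_int8.log
#   /usr/src/tensorrt/bin/trtexec --onnx=yolov5x6.onnx --fp16 2>&1 | tee trt_fp16.log
#   parse_qps trt_int8.log
#   parse_qps trt_fp16.log
```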

Is your ONNX model explicit batch?


@mchi This is my model input, so I can convert it with --explicitBatch, right?

It looks like your model has a fixed batch size, i.e. 1. You could use TensorRT/ONNX tooling to modify the model to use a dynamic batch dimension, so that you can try higher batch sizes to get better perf.
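As a sketch of that workflow: the yolov5 exporter has a --dynamic flag for exporting with a dynamic batch axis (verify against your yolov5 version), after which trtexec can profile a range of batch sizes via its shape-range flags. The input tensor name "images" below is yolov5's usual default, an assumption here; check yours with Netron or in the trtexec log.

```shell
# Re-export the ONNX model with a dynamic batch dimension
# (yolov5 repo's exporter; flag availability depends on your version):
python3 export.py --weights yolov5x6.pt --include onnx --dynamic

# Build and profile with an explicit batch range (min/opt/max shapes);
# trtexec will report throughput at the optimal shape:
/usr/src/tensorrt/bin/trtexec --onnx=yolov5x6.onnx \
    --minShapes=images:1x3x640x640 \
    --optShapes=images:4x3x640x640 \
    --maxShapes=images:8x3x640x640 \
    --workspace=2048 --int8
```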


Hi, sorry to ask an off-topic question, but how did you generate the output in both of your screenshots, with the JetPack version etc.?


It should be GitHub - rbonghi/jetson_stats: 📊 Simple package for monitoring and control your NVIDIA Jetson [Xavier NX, Nano, AGX Xavier, TX1, TX2]
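For reference, jetson-stats installs via pip and ships a board-summary command alongside the jtop monitor (a sketch; exact output depends on your JetPack/L4T release):

```shell
# Install jetson-stats (provides the jtop monitor and jetson_release):
sudo -H pip3 install -U jetson-stats

# Print the board summary -- JetPack, L4T, CUDA, TensorRT versions:
jetson_release

# Or launch the interactive monitor:
jtop
```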


Hi @hieptt,
did you get your model converted to a dynamic-batch ONNX model?