Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
pipeline.txt (1.5 KB)
deepstream-app -c pipeline.txt
The inference model is YOLOv5. The picture shows my graphics card resource usage.
This is a bug in version 6.1. Please update to version 6.2.
With your config file, it works as expected.
After updating to version 6.2, these problems do not occur with the resnet10 model in the SDK, but they still occur with the YOLO model.
deepstream-app -c /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_Yolo/deepstream_app_config_yoloV2.txt
The results are as follows:
I think it’s a new bug.
If you modify the value of gpu-id from 0 to 3 (if you have more than 4 GPUs) in config_infer_primary_yoloV2.txt, the bug will occur. If you use gpu-id=0, as in deepstream_app_config_yoloV2.txt, it will be OK.
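For reference, gpu-id appears in both files of the stock 6.2 YOLO sample; a minimal sketch of the relevant fragments (section and key names from the sample, the value 3 is just the example that triggers the bug here):

```ini
# deepstream_app_config_yoloV2.txt -- app-level primary GIE settings
[primary-gie]
enable=1
gpu-id=3
config-file=config_infer_primary_yoloV2.txt

# config_infer_primary_yoloV2.txt -- nvinfer plugin settings
[property]
gpu-id=3
```

Leaving both keys at 0 avoids the problem; changing the [property] gpu-id to a non-zero device reproduces it.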
There is a workaround: set all gpu-id=0, and before running the program, export CUDA_VISIBLE_DEVICES=<the GPU id you want to run the program on>.
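A concrete sketch of that workaround (GPU index 3 is only an example; the sample config path is the stock 6.2 location):

```shell
# Workaround sketch: keep gpu-id=0 in every DeepStream config file and
# select the real GPU with CUDA_VISIBLE_DEVICES before launching.
# CUDA renumbers the chosen device as device 0 inside the process.
export CUDA_VISIBLE_DEVICES=3   # example: run on physical GPU 3
echo "Visible GPUs: $CUDA_VISIBLE_DEVICES"

# deepstream-app -c /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_Yolo/deepstream_app_config_yoloV2.txt
```

Because the process only sees one device, every gpu-id=0 in the config files resolves to the GPU you exported.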
We will look into this issue and get back to you once there is any progress.