Using a yolov4 model in deepstream-pose-classification

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) RTX3080
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) yolov4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3

I am using a yolov4 model trained with TAO version nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3.
When I use it as the primary detection model in deepstream-pose-classification, I get the following errors.


root@4ebf34b446f6:/workspace/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification-new# ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config_new.yaml
width 1920 hight 1080
video file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/standrew_2.mp4
video file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/standrew_2.mp4
video file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/standrew_2.mp4
video file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/standrew_2.mp4
video file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/standrew_2.mp4
video file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/standrew_2.mp4
WARNING: Overriding infer-config batch-size (0) with number of sources (6)
WARNING: Overriding infer-config batch-size (0) with number of sources (6)
config_file_path:/workspace/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt
Now playing!
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:4 initialized model: poseclassificationnet
WARNING: infer_utils.cpp:176 unsupported tensor order for dims to image-info, retry as kLinear
frameSeqLen:300
0:00:01.037037021   230 0x55bab98efd00 WARN           nvinferserver gstnvinferserver_impl.cpp:360:validatePluginConfig:<secondary-nvinference-engine> warning: Configuration file batch-size reset to: 6
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: bodypose3dnet
gstnvtracker: Loading low-level lib at /workspace/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Loading TRT Engine for tracker ReID...
[NvMultiObjectTracker] Loading Complete!
E0703 08:12:08.328856 230 logging.cc:40] 3: [runtime.cpp::~Runtime::346] Error Code 3: API Usage Error (Parameter check failed at: runtime/rt/runtime.cpp::~Runtime::346, condition: mEngineCounter.use_count() == 1. Destroying a runtime before destroying deserialized engines created by the runtime leads to undefined behavior.
)
[NvMultiObjectTracker] Initialized
0:00:01.284235891   230 0x55bab98efd00 WARN           nvinferserver gstnvinferserver_impl.cpp:360:validatePluginConfig:<primary-nvinference-engine> warning: Configuration file batch-size reset to: 6
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: yolov4
ERROR: infer_trtis_server.cpp:1146 Triton: Triton inferAsync API call failed, triton_err_str:Invalid argument, err_msg:[request id: 2] inference request batch-size must be <= 4 for 'yolov4'
ERROR: infer_trtis_backend.cpp:594 TRT-IS async inference failed., nvinfer error:NVDSINFER_TRITON_ERROR
ERROR: infer_trtis_backend.cpp:363 failed to specify dims when running inference on model:yolov4, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.367529106   230 0x55bab98efd00 ERROR          nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger:<primary-nvinference-engine> nvinferserver[UID 1]: Error in specifyBackendDims() <infer_trtis_context.cpp:204> [UID = 1]: failed to specify input dims triton backend for model:yolov4, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.367540145   230 0x55bab98efd00 ERROR          nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger:<primary-nvinference-engine> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:292> [UID = 1]: failed to specify triton backend input dims for model:yolov4, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.367727125   230 0x55bab98efd00 ERROR          nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger:<primary-nvinference-engine> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:79> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.367739361   230 0x55bab98efd00 WARN           nvinferserver gstnvinferserver_impl.cpp:592:start:<primary-nvinference-engine> error: Failed to initialize InferTrtIsContext
0:00:01.367748703   230 0x55bab98efd00 WARN           nvinferserver gstnvinferserver_impl.cpp:592:start:<primary-nvinference-engine> error: Config file path: /workspace/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream_tao_apps/configs/triton/yolov4_tao_new/pgie_yolov4_tao_config.yml
0:00:01.368002185   230 0x55bab98efd00 WARN           nvinferserver gstnvinferserver.cpp:518:gst_nvinfer_server_start:<primary-nvinference-engine> error: gstnvinferserver_impl start failed
size:20
ERROR: infer_trtis_server.cpp:885 Triton: failed to stop repo server, triton_err_str:Internal, err_msg:Exit timeout expired. Exiting immediately.
[NvMultiObjectTracker] De-initialized
Running...

ERROR from element primary-nvinference-engine: Failed to initialize InferTrtIsContext
Error details: gstnvinferserver_impl.cpp(592): start (): /GstPipeline:deepstream_pose_classfication_app/GstNvInferServer:primary-nvinference-engine:
Config file path: /workspace/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream_tao_apps/configs/triton/yolov4_tao_new/pgie_yolov4_tao_config.yml
Returned, stopping playback
Deleting pipeline

The config yaml file for yolov4 is attached:
pgie_yolov4_tao_config.txt (2.2 KB)

The label file is here
yolov4_labels.txt (10 Bytes)

It seems that you are using a yolov4 model in deepstream-pose-classification. I will transfer this topic to the DeepStream forum for better support.

Let’s check this issue in the DeepStream forum.

Could you attach your config.pbtxt file? Did you set the max_batch_size to 6 in this file?
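
For reference, the limit Triton enforces here comes from the max_batch_size field at the top level of config.pbtxt. A minimal sketch of that file for a TensorRT yolov4 model (the engine filename and the input tensor name/dims below are illustrative assumptions, not taken from your file):

  name: "yolov4"
  platform: "tensorrt_plan"
  max_batch_size: 6                         # must cover the number of sources (6 here)
  default_model_filename: "yolov4.engine"   # assumed engine filename
  input [
    {
      name: "Input"                         # assumed input tensor name
      data_type: TYPE_FP32
      dims: [ 3, 384, 1248 ]                # assumed CHW input dims
    }
  ]

Note that for a tensorrt_plan model Triton checks max_batch_size against the engine itself, so raising it in config.pbtxt only helps if the engine was also built with at least that maximum batch size.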

This is the config.pbtxt file:
configpbtxt.txt (1.8 KB)

OK. The batch size of your model seems to be fixed: the Triton error above shows the engine only accepts batches of up to 4, while our code configures the batch-size of nvinfer based on the number of sources.
There are two methods you can try:

  1. Re-train/re-export your model so it has a dynamic batch size.
  2. Modify the code and comment out the if (pgie_batch_size != num_sources) block (see the sketch below).
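
Regarding method 2: the override follows the pattern used throughout the DeepStream sample apps, and it is what prints the “WARNING: Overriding infer-config batch-size (0) with number of sources (6)” lines in your log. In the app’s main source it should look roughly like this (a sketch; the exact variable names in your copy may differ):

  guint pgie_batch_size = 0;

  /* Read the batch-size that the config file set on the PGIE element. */
  g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
  if (pgie_batch_size != num_sources) {
    g_printerr ("WARNING: Overriding infer-config batch-size (%d) "
        "with number of sources (%d)\n", pgie_batch_size, num_sources);
    /* Commenting out this g_object_set() keeps the config-file batch-size
     * instead of forcing it to the number of sources. */
    g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
  }

With the override disabled, set the batch size in your nvinferserver config (pgie_yolov4_tao_config.yml) to a value the engine accepts, i.e. 4 or less per the Triton error above.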
