I ran into difficulties running YOLOv4 on the x86 platform

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): V100
• DeepStream Version: 5.1.0
• TensorRT Version: 7.2.2.3
• NVIDIA GPU Driver Version (valid for GPU only): 460.73.01

  1. I installed TensorRT-7.2.2.3 and can run sample_mnist successfully. Next, I downloaded the TensorRT OSS source (https://github.com/NVIDIA/TensorRT/tree/21.02) and followed the steps to build the TensorRT OSS plugin (https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/x86).
  2. I downloaded the TLT models (wget https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip -O models.zip) and used the YOLOv4 .etlt model (yolov4_resnet18.etlt).
  3. I compiled nvdsinfer_custombboxparser_tlt.cpp (deepstream_tlt_apps/post_processor at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub).
  4. I executed `deepstream-app -c lp_test.txt`; no exception was reported.
  5. Result: no targets are detected, but in the upper-left corner of the video a label keeps flashing.
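The OSS plugin build in step 1 can be sketched as below. This is a sketch, not a verbatim transcript: the GPU_ARCHS value, library paths, and version numbers are assumptions for a V100 / TensorRT 7.2.2.3 x86 setup, following the TRT-OSS/x86 README.

```shell
# Clone the TensorRT OSS branch matching the installed TensorRT (21.02 ~ TRT 7.2.2)
git clone -b 21.02 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

# Build only the plugin library; GPU_ARCHS=70 targets Volta (V100)
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=70 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out
make nvinfer_plugin -j"$(nproc)"

# Back up the stock plugin and install the rebuilt one (exact .so version is assumed)
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2.bak
sudo cp out/libnvinfer_plugin.so.7.2.2 /usr/lib/x86_64-linux-gnu/
sudo ldconfig
```

The rebuilt library matters because the stock libnvinfer_plugin does not include the BatchedNMS plugin that the TLT YOLOv4 engine relies on.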

    The following are the relevant files:
    yolov4_labels.txt (29 Bytes), lp_test.txt (3.8 KB), pgie_yolov4_tlt_config.txt (2.1 KB)
    What could be causing this? I hope you can give me some suggestions to solve this problem.

Thank you very much!

Hi @1210586191 ,
This is a known issue - GitHub - NVIDIA-AI-IOT/deepstream_tlt_apps: Sample apps to demonstrate how to deploy models trained with TLT on DeepStream
We will check whether there is a solution now and get back to you later.

In the meantime, maybe you could try this YOLOv4 sample: GitHub - NVIDIA-AI-IOT/yolov4_deepstream
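Also worth double-checking: "no detections but a flashing label" is often a bbox-parser mismatch. Below is a sketch of the parser-related properties for a YOLOv4 TLT pgie config; the values and paths are assumptions based on the public deepstream_tlt_apps sample, so compare them against your pgie_yolov4_tlt_config.txt:

```
[property]
# Encoded TLT model and its decode key (nvidia_tlt is assumed, as used by the public sample models)
tlt-encoded-model=../../models/yolov4/yolov4_resnet18.etlt
tlt-model-key=nvidia_tlt
labelfile-path=yolov4_labels.txt
num-detected-classes=4
network-type=0
# YOLOv4 TLT exports a BatchedNMS head; the output blob and parse function must match it
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tlt.so
```

If custom-lib-path points at a parser .so built against a different TensorRT, or parse-bbox-func-name does not match the exported head, the pipeline can run without any error while drawing no boxes.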