Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): V100
• DeepStream Version: 5.1.0
• TensorRT Version: 7.2.2.3
• NVIDIA GPU Driver Version (valid for GPU only): 460.73.01
- I installed TensorRT 7.2.2.3 and can run sample_mnist successfully. Next, I cloned the TensorRT OSS repository (https://github.com/NVIDIA/TensorRT/tree/21.02) and followed the steps to build the TensorRT OSS plugin (https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/x86).
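For reference, the OSS plugin build I followed can be sketched roughly as below (branch 21.02 matches TensorRT 7.2.x; `GPU_ARCHS=70` targets the V100; the TensorRT install path and the exact built library name are assumptions from my setup and may differ on yours):

```shell
# Sketch of the TensorRT OSS plugin build (paths/versions are assumptions for this setup)
git clone -b 21.02 https://github.com/NVIDIA/TensorRT.git TensorRT
cd TensorRT
git submodule update --init --recursive

mkdir -p build && cd build
# GPU_ARCHS=70 is the Volta (V100) compute capability;
# TRT_LIB_DIR points at the TensorRT 7.2.2.3 libraries
cmake .. -DGPU_ARCHS=70 \
         -DTRT_LIB_DIR=/usr/local/TensorRT-7.2.2.3/lib \
         -DTRT_OUT_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin and replace it with the freshly built one,
# so DeepStream/TensorRT picks up the OSS BatchedNMS plugin at runtime
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2.bak
sudo cp out/libnvinfer_plugin.so.7.2.2 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2
sudo ldconfig
```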
- I downloaded the TLT models (wget https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip -O models.zip) and used the YOLOv4 .etlt model (yolov4_resnet18.etlt).
- Compiled nvdsinfer_custombboxparser_tlt.cpp (from deepstream_tao_apps/post_processor at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub).
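The parser build was just the sample Makefile; roughly as below (the CUDA_VER value is from my environment, and the output library name may differ between repo versions):

```shell
# Sketch: compile the TLT custom bbox parser (CUDA_VER matches the local CUDA install)
cd deepstream_tao_apps/post_processor
export CUDA_VER=11.1
make
# Produces the custom parser shared library (e.g. libnvds_infercustomparser_tao.so),
# which the pgie config then references via custom-lib-path
```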
- Executed `deepstream-app -c lp_test.txt`; no exception was reported.
- Result: no targets are detected, but a label keeps flashing in the upper-left corner of the video.
The following are the relevant files:
yolov4_labels.txt (29 Bytes) lp_test.txt (3.8 KB) pgie_yolov4_tlt_config.txt (2.1 KB)
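For context, the inference-related fields in my pgie_yolov4_tlt_config.txt follow the deepstream_tao_apps YOLOv4 sample; a minimal sketch is below (the file paths, model key, and class count are placeholder values taken from the sample, not necessarily correct for every setup):

```
[property]
# Paths below are illustrative placeholders
labelfile-path=yolov4_labels.txt
tlt-encoded-model=./models/yolov4/yolov4_resnet18.etlt
tlt-model-key=nvidia_tlt
network-type=0
num-detected-classes=4
# YOLOv4 .etlt models use the BatchedNMS plugin from TensorRT OSS,
# so the custom parser built from post_processor is required:
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=./post_processor/libnvds_infercustomparser_tao.so
```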
May I ask what causes this behavior? I hope you can give me some suggestions to solve this problem.
Thank you very much!