YOLOv4 on DeepStream 5.0: no visual output

Please provide complete information as applicable to your setup.
Hardware Platform (Jetson / GPU): Jetson Xavier AGX
DeepStream Version: 5.0
JetPack Version (valid for Jetson only): 4.4
TensorRT Version: 7.1.3-1+cuda10.2

Hi, I followed the instructions posted over here: DeepStream SDK FAQ - #7 by bcao
For some reason I don't understand, I get this output in the shell, but no visual output, i.e. no video with the marked detections…

Unknown or legacy key specified 'is-classifier' for group [property]
Opening in BLOCKING MODE 
0:00:03.976461743   698     0x26425b60 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/xavier_ssd/deepstream/deepstream-5.0/sources/objectDetector_Yolo/yolov4_1_3_608_608_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608       
1   OUTPUT kFLOAT boxes           22743x1x4       
2   OUTPUT kFLOAT confs           22743x80        

0:00:03.976671001   698     0x26425b60 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /xavier_ssd/deepstream/deepstream-5.0/sources/objectDetector_Yolo/yolov4_1_3_608_608_fp16.engine
0:00:03.997042392   698     0x26425b60 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/xavier_ssd/deepstream/deepstream-5.0/sources/objectDetector_Yolo/config_infer_primary_yoloV4.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:167>: Pipeline running

NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 0 
avg bitrate=0 for CBR, force to CQP mode

**PERF:  FPS 0 (Avg)	
**PERF:  28.07 (26.81)	
**PERF:  28.44 (28.37)	
**PERF:  28.54 (28.38)	
**PERF:  28.36 (28.39)	
**PERF:  28.41 (28.39)	
**PERF:  28.52 (28.43)	
**PERF:  28.35 (28.42)	
**PERF:  28.40 (28.42)	
**PERF:  28.45 (28.41)	
**PERF:  28.39 (28.41)

Any suggestions as to what I'm doing wrong?

In deepstream_app_config_yoloV4.txt, the sink is configured to write yolov4.mp4. Can't you find this video in your local folder?

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=3
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
output-file=yolov4.mp4
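
If you want to see the detections rendered on screen instead of (or in addition to) the file, a sink of type 2 (EglSink) should do it. A minimal sketch, assuming a display is attached and the group is added as [sink1]:

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0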

Oh, you are right. Thank you very much for pointing me in the right direction again.
I was trying for so long, and after I started over completely from zero I forgot to change this :(

Can you please share the config file you used to run YOLOv4? I cannot find the functions in the C++ file to create an engine file for the YOLOv4 model.

You have to follow the instructions over here: DeepStream SDK FAQ - #7 by bcao to create an engine file. There you will also find the config files I used.

I did not spend more time on the deepstream-app config because I use the Python example.
There you just have the problem with the batch size…
But I have to modify so many things for my project that I need to use Python instead of C++, where I don't have any skills…
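
For reference, my guess at the usual batch-size culprit: the batch-size in the nvinfer config has to match the batch size the engine was exported with (1 for the yolov4_1_3_608_608 engine above). A sketch of the relevant line in config_infer_primary_yoloV4.txt, assuming a batch-1 engine:

[property]
#must match the batch size the ONNX/engine was exported with (value here is an assumption, adjust to your export)
batch-size=1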


I also want to run YOLOv4 with the Python apps. I have my custom weights and config file. What are the steps?

Dude, if you had read the DeepStream SDK FAQ I linked above, you would not be asking this question…
It describes how to convert YOLO weights to a TensorRT engine for DeepStream…
If you don't get DeepStream running, you can also check out this link: GitHub - jkjung-avt/tensorrt_demos: TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
That's what I'm using at the moment. It's a bit slower than DeepStream, but it works great and everything you need to get it running is described very nicely.

Okay, thanks.