App Run Fails With Errors

• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0.1
• DeepStream Install Method: .tar Package
• Power Mode: MAXN
• JetPack Version: 4.6.4 [L4T 32.7.4]
• TensorRT Version: 8.2.1.8
• NVIDIA GPU Driver Version: N/A

• How to reproduce the issue?

I am trying to follow the steps specified in this forum topic for DeepStream 6.0.1 and YOLOv4 (since I can't use DeepStream 5.0): Get wrong infer results while Testing yolov4 on deepstream 5.0

When I run the DeepStream app with this command:

sudo deepstream-app -c deepstream_app_config_yoloV4.txt

I get this output:


(deepstream-app:15261): GStreamer-CRITICAL **: 13:51:54.109: passed '0' as denominator for `GstFraction'

(deepstream-app:15261): GStreamer-CRITICAL **: 13:51:54.109: passed '0' as denominator for `GstFraction'
** ERROR: <create_camera_source_bin:163>: Failed to link 'src_elem' (image/jpeg; video/mpeg, mpegversion=(int)4, systemstream=(boolean)false; video/mpeg, mpegversion=(int)2; video/mpegts, systemstream=(boolean)true; video/x-bayer, format=(string){ bggr, gbrg, grbg, rggb }, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-dv, systemstream=(boolean)true; video/x-h263, variant=(string)itu; video/x-h264, stream-format=(string){ byte-stream, avc }, alignment=(string)au; video/x-pwc1, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-pwc2, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-raw, format=(string){ RGB16, BGR, RGB, GRAY8, GRAY16_LE, GRAY16_BE, YVU9, YV12, YUY2, YVYU, UYVY, Y42B, Y41B, YUV9, NV12_64Z32, NV24, NV61, NV16, NV21, NV12, I420, BGRA, BGRx, ARGB, xRGB, BGR15, RGB15 }, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-sonix, width=(int)[ 1, 32768 ], height=(int)[ 1, 32768 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-vp8; video/x-vp9; video/x-wmv, wmvversion=(int)3, format=(string)WVC1) and 'src_cap_filter1' (video/x-raw, width=(int)0, height=(int)0)
** ERROR: <create_camera_source_bin:218>: create_camera_source_bin failed
** ERROR: <create_pipeline:1326>: create_pipeline failed
** ERROR: <main:688>: Failed to create pipeline
Quitting
App run failed

Here are config files:

config_infer_primary_yoloV4.txt (3.4 KB)

deepstream_app_config_yoloV4.txt (3.8 KB)

This is the nvdsparsebbox_Yolo.cpp after adding c++ functions:

nvdsparsebbox_Yolo.cpp (18.0 KB)

Thanks in advance for all the help you offer.

NOTE:

When I enter the command to test out DeepStream:

sudo deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

I saw an error message with a few warnings on the terminal. The Jetson got very slow and I saw a memory warning on my screen. After a while, the window that was supposed to show the video popped up, but half of the display was black, and in the other half a frame showed up and froze without playing any further. I got this output on the terminal:

deepstream-app_output_log.txt (16.2 KB)

When I try the same command with increased verbosity:

sudo deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt --gst-debug=3

I got this output:

detailed_deepstream-app_output_log.txt (33.6 KB)

You set “type=1” in the source0 section, which is for a V4L2 camera. You should use type=2 or 3 for your case.

I have set “type=1” in the source0 section since I wanted it to infer on images coming from a USB camera. Thanks.

Then you need to update the uri in your config, something like “uri=v4l2:///dev/video0” (change /dev/video0 to the real camera device on your system).
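
For illustration, a minimal [source0] group along those lines might look like the sketch below (the device node is only a placeholder; use your actual camera node):

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=v4l2:///dev/video0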

I have set “type=2” and “uri=v4l2:///dev/video2”. Then, when I ran the DeepStream app, I got this output:

new_deepstream_output_log.txt (2.5 KB)

Modify the following configuration items to match your device:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

For more information about v4l2, you can refer to this FAQ.
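
(As a quick sanity check, not from the FAQ itself: you can list the formats, resolutions, and frame rates your camera actually supports with v4l2-ctl, assuming v4l-utils is installed. The device node below is the one mentioned earlier in this thread; change it to yours.)

v4l2-ctl --device=/dev/video2 --list-formats-ext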

I noticed that you are using a Jetson Nano. This device has limited memory. You can refer to
source1_usb_dec_infer_resnet_int8.txt in /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app.

I have modified [source0] as specified. After the command sudo deepstream-app -c deepstream_app_config_yoloV4.txt, I got this output:

latest_deepstream_yolov4_output_log.txt (2.5 KB)

Also, I have referred to the FAQ on v4l2 and tried these commands from it:

sudo gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1'  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0  nvstreammux name=mux width=1280 height=720 batch-size=1  ! fakesink

And

sudo gst-launch-1.0  v4l2src device=/dev/video2 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! videoconvert ! 'video/x-raw, format=NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12'  ! fakesink

Both gave me the same output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

This is the output I got after the command sudo deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt --gst-debug=3:

new_source1_deepstream_output_log.txt (4.0 KB)

Thanks.

Let's focus on deepstream_app_config_yoloV4.txt first.

In your log:
Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:964> [UID = 1]: RGB/BGR input format specified but network input channels is not 3

This means that your model file may have some problems.

Did you generate yolov4.engine according to the README?

If you want to run on Jetson, the engine file must be generated on the Jetson; it is GPU-related.
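
(Not from the thread, but a quick way to see what the error is complaining about is to print the ONNX model's first input tensor and confirm it really has 3 channels in NCHW order; this assumes the onnx Python package is installed on the Nano:)

python3 -c "import onnx; m = onnx.load('yolov4.onnx'); print(m.graph.input[0])"

If the dimension right after the batch dimension is not 3 (for example, an NHWC model), that would match the preparePreprocess() failure above.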

Previously I hadn't generated yolov4.engine using the method in the README. Instead, I followed this method: I got the ONNX file for the pretrained YOLOv4 model from the ONNX Model Zoo, then created yolov4.engine with this command:

trtexec --onnx=yolov4.onnx --saveEngine=yolov4.engine

on my Jetson Nano, in the directory where my yolov4.onnx file resides.

Now, I have tried to generate yolov4.engine according to the README. I managed to build the darknet I cloned from https://github.com/AlexeyAB/darknet with cuDNN and OpenCV. I cloned the conversion repository and then used this command to convert the YOLOv4 model to ONNX:

python3 demo_darknet2onnx.py ~/yolo_workspace/darknet/cfg/yolov4.cfg ~/yolo_workspace/darknet/data/coco.names ~/yolo_workspace/darknet/yolov4.weights ~/yolo_workspace/darknet/data/dog.jpg 1

After the command, I got this output: Illegal instruction (core dumped)

Thanks.

You can try the YOLOv4 provided by NVIDIA.

The model you mentioned was only tested on DS-5.0; it's too old. Not sure it will work well with DS-6.0.

Sorry, I couldn't tell which model you mean. Do you mean the one I got from the ONNX Model Zoo or the one I got from the darknet GitHub repository? Also, it must have slipped my attention, but I couldn't find a YOLOv4 model at the link provided.

Sorry for the wrong link.

This is the right project.

Follow the README to run DeepStream with YOLOv4 detection.

It had slipped my mind previously; just as I thought, I found this there:

link to yolov4 onnx

Then I followed the README. But after the command:

deepstream-app -c deepstream_app_config_yoloV4.txt

I got this output:

** ERROR: <parse_config_file:542>: parse_config_file failed
** ERROR: <main:679>: Failed to parse config file 'deepstream_app_config_yoloV4.txt'
Quitting
App run failed

After that, I checked the README inside deepstream_yolo, and I saw this line:

### 2.1 Please make sure DeepStream 6.1.1+ is properly installed ###

But the README says:

2.1 Please make sure DeepStream 5.0+ is properly installed

Instead of 6.1.1

Thanks.

In the project I saw that I would need TensorRT OSS for YOLOv4. I have begun to follow this README. While following it, I git cloned the right version of TensorRT OSS. But when I enter the command cd $TRT_OSSPATH, it moves me to “~”. Is that expected? Also, I don't want to use Docker, and I couldn't work out which build method is the right one for that.
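
(Side note: if TRT_OSSPATH was never exported in that shell, cd $TRT_OSSPATH expands to a bare cd, which always lands in the home directory. Something like the following avoids that; the clone location here is only an example:)

export TRT_OSSPATH=~/TensorRT
cd $TRT_OSSPATH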

Maybe you are comparing different branches, and this problem is caused by the difference between main and master.

I have tried the yolov4 ONNX on DS-6.3. There are indeed some issues; we will check this.

I also ran yolov4 on DS-6.3 with this project, and it worked.

No need for TensorRT OSS if you have git-lfs installed, or try git lfs pull after git clone.

According to the table, DeepStream versions later than 6.0.1 are not compatible with the Jetson Nano; thus, I can't use DS-6.3.

It doesn't matter. I mean this project has been tested on DS-6.0.1.

Just make sure to checkout the corresponding branch.

No need for TensorRT OSS if you have git-lfs installed, or try git lfs pull after git clone.
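
For reference, the sequence being suggested would look roughly like the following; the repository URL and branch name here are assumptions, so use the branch that matches your DeepStream version:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
# branch name is an assumption; check out the one matching DS 6.0.1
git checkout release/tao3.0_ds6.0.1
git lfs pull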

I have cloned deepstream_tao_apps (tao 3.0 for ds-6.0.1) to “~”. Then I downloaded the models, and after that installed git lfs; git lfs pull in the “~/deepstream_tao_apps” directory was successful, with 3 files pulled. Do I still need to follow these steps, or do I just need to modify pgie_yolov4_tao_config.txt at “~/deepstream_tao_config.txt” as discussed and then use this command:

sudo deepstream-app -c pgie_yolov4_tao_config.txt

Thanks.

First, make sure ds-tao-detection with yolov4 can run successfully.

Then try migrating to deepstream-app.

Those items are for fine-tuning; you can ignore them for the moment.
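
As a rough sketch of the later migration step (the path below is an assumption based on where the repository was cloned): the deepstream-app config keeps its own [source0]/[sink0] groups and points its [primary-gie] group at the TAO nvinfer config, for example:

[primary-gie]
enable=1
# path is an assumption; adjust to where deepstream_tao_apps was cloned
config-file=/home/<your-user>/deepstream_tao_apps/configs/yolov4_tao/pgie_yolov4_tao_config.txt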

After the command:

./apps/tao_detection/ds-tao-detection  -c configs/yolov4_tao/pgie_yolov4_tao_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264 -d

I got this output:

deepstream_tao_detection_output_log.txt (4.9 KB)

Thanks.