DeepStream4: deepstream-app -c config_infer_primary_yoloXX_XX.txt giving errors

When I try to run any of the Yolo "config_infer_primary" files I get this:

nano@nano-desktop:/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo$ deepstream-app -c config_infer_primary_yoloV3_tiny.txt

(deepstream-app:6759): GStreamer-CRITICAL **: 11:17:55.923: gst_element_get_static_pad: assertion 'GST_IS_ELEMENT (element)' failed
Segmentation fault (core dumped)

But any of the YOLO “deepstream_app_config” files run with no problem:

nano@nano-desktop:/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo$ deepstream-app -c deepstream_app_config_yoloV3_tiny.txt

Any ideas?

Hi, have you been able to run the YoloV3 sample at INT8 precision? I got this warning:

nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.

The resulting engine is FP16 instead. I haven’t changed anything in config_infer_primary_yoloV3.txt:

[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3.cfg
model-file=yolov3.weights
#model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt5.1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=80
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

I get the same error when I try to run “INT8” on any of the yolo models.

I went over to the “objectDetector_SSD” folder and ran that example in “INT8” and it ran with no errors.

There is a note in that README file that says:

NOTE: To use INT8 mode, the INT8 calibration file for the SSD model needs to be
provided along with changing the network-mode to 1 in config_infer_primary_ssd.txt.
Refer to sampleUffSSD for running the sample in INT8 mode. The sample writes the
calibration cache to file “CalibrationTableSSD”.

I don't know if that also pertains to the YOLO models.

By the way,
the original issue I had when I started this thread was because I didn't realize that you just have to change and save the “config_infer_primary” files. They don't need to be run on their own. DUH

I tried “objectDetector_SSD” and got the same error, i.e. it switches from INT8 to FP16.

Following the note in the README, I managed to create the “CalibrationTableSSD” file, copy it to the “objectDetector_SSD” folder, and modify the config file with:

int8-calib-file=CalibrationTableSSD
network-mode=1

Still the same error as with the YoloV3 model. The only INT8 model working so far is the one from the TensorRT samples.

You need to run the command line:

deepstream-app -c deepstream_app_config_yoloXXX.txt

config_infer_xxx.txt is loaded from deepstream_app_config*.txt and passed to the nvinfer plugin; you cannot pass it to deepstream-app directly.
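For reference, this is roughly how the sample app config references the nvinfer config (illustrative excerpt from deepstream_app_config_yoloV3.txt; the other keys in that group are omitted here):

[primary-gie]
enable=1
config-file=config_infer_primary_yoloV3.txt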

Regarding the “Calibration Table”, each model has its own INT8 table, which cannot be shared with other model files. YoloV3’s calibration table is just for YoloV3; it cannot be shared with yolov3-tiny, yolov2, or yolov2-tiny.

Another thing is the TRT version. All of DS4.0’s INT8 calibration files are based on TRT 5.1.x. Please ensure your TRT version matches.
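In other words, each config_infer file should point at its own table, along these lines (the yolov3-tiny table name below is just a placeholder for whatever table you generate for that model):

# in config_infer_primary_yoloV3.txt
int8-calib-file=yolov3-calibration.table.trt5.1
network-mode=1

# in config_infer_primary_yoloV3_tiny.txt
int8-calib-file=<calibration table generated for yolov3-tiny>
network-mode=1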

Yes, I was using the calibration table for YOLOv3 provided in the same directory: “yolov3-calibration.table.trt5.1” and set network-mode to 1 in the config file, but running:

deepstream-app -c deepstream_app_config_yoloV3.txt

would give this warning:

nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.

The result model is FP16 instead of INT8.

Yes, as the log printed, your platform does not support INT8 precision.
There’s a fallback strategy in nvinfer: INT8 -> FP16 -> FP32.
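If your platform does not support INT8 anyway, one option is to set FP16 explicitly in the config_infer file so the engine is built at the precision you expect and the fallback warning goes away:

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2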

I was using a Jetson Nano and hoping it supports INT8, since FP16 is too slow. Any idea how the Jetson Nano can stream 8 channels at 1080p 30 FPS as advertised here?

If I remember correctly, Nano doesn’t support INT8.
To get high performance on Nano, you may try source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt or source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt, which are based on resnet10 and a smaller input size.
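For example, assuming the default install location (adjust the path if your samples live elsewhere):

$ cd /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt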

Regarding YoloV3, detection is more accurate but it is a heavier network. To get better performance, you may try changing the input dims (width/height) to a smaller size in yolov3, e.g. in yolov3.cfg:

width=416
height=416

or use the tiny model, deepstream_app_config_yoloV3_tiny.txt.
Some users also customize and retrain a smaller yolo network.
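If you shrink the dims further, keep width/height multiples of 32 (a darknet requirement), e.g. in yolov3.cfg:

width=320
height=320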

I got “File does not exist : yolov3-tiny.cfg”.
My bad. I had not run ./prebuild.sh yet:

$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo
$ ./prebuild.sh

Is there a way to delete our own comment(s) at the forum?

Yes, it’s in the README. Good to see you found it.
I have no idea how to remove comments.

I am getting the following error.
I have the yolov3.weights and yolov3.cfg files in the same directory.
“No model files specified”: which model is it referring to?

generateTRTModel(): No model files specified
initialize(): Failed to create engine from model files
error: Failed to create NvDsInferContext instance

[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3.cfg
model-file=yolov3.weights
#model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt5.1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=80
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
#parse-bbox-func-name=NvDsInferParseCustomYoloV3
#custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

The logic is:

  1. Check model-engine-file; if the engine can be deserialized, it is used directly, so make sure the engine file exists. If there are custom TRT plugins, the lib also needs to be registered or preloaded.
  2. If step 1 fails, it will check custom-lib-path to see whether a custom model parser exists. nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so has the yolo model builder inside.

WRT YoloV3, you always need to keep the custom-lib-path, even after model-engine-file has been generated, because there is also a custom TRT plugin inside the model.
BTW, you also need to enable parse-bbox-func-name, since yolo’s bbox parsing is different.
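So in the config pasted above, the last two lines need to be uncommented again:

parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so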