TX2 DS4.0: GenerateTRTModel(): failed to create network using custom network creation function

Hello,
I want to use the objectDetector_Yolo sample to run a Slim-YOLOv3 model, but there is an error when creating the TRT engine.

1. How do I update the YOLO TensorRT code to run the Slim-YOLOv3 model? I have the Slim-YOLOv3 .weights and .cfg files.

2. What is the purpose of yolov3-calibration.table.trt5.1? Does it need to be updated?

[code]
Using winsys: x11
Creating LL OSD context new
0:00:01.220224192 10517 0x55b1d5c660 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
Yolo type is not defined from config file name:../../../objectdetector_yolo/data/prune_0.5_0.5_0.7.cfg
0:00:01.222722752 10517 0x55b1d5c660 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:01.222797248 10517 0x55b1d5c660 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.223218816 10517 0x55b1d5c660 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:01.223263936 10517 0x55b1d5c660 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/nvidia/TX2/tensorRT/deepstream_sdk_v4.0_v7/sources/objectDetector_Yolo/config_infer_primary_yoloV3.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
** ERROR: main:655: Failed to set pipeline to PAUSED
[/code]

Hi,

It looks like there is no YOLO layer inside your model.

Yolo type is not defined from config file name:../../../objectdetector_yolo/data/prune_0.5_0.5_0.7.cfg

Would you mind checking whether that layer's name has been changed or the layer has been replaced by other layers?
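For reference, the objectDetector_Yolo sample infers the network variant from the .cfg file name, so a custom name such as prune_0.5_0.5_0.7.cfg is not recognized. The sketch below only illustrates that kind of check; the function name and structure are assumptions, not the exact sample code.

[code]
#include <iostream>
#include <string>

// Illustrative sketch (assumed names): the sample decides the YOLO variant
// from the .cfg file name, so "prune_0.5_0.5_0.7.cfg" falls through to the
// error branch. Renaming the files to contain "yolov3" lets it take the
// YOLOv3 path.
static bool getYoloType(const std::string& cfgFile, std::string& networkType)
{
    if (cfgFile.find("yolov3-tiny") != std::string::npos)
        networkType = "yolov3-tiny";
    else if (cfgFile.find("yolov3") != std::string::npos)
        networkType = "yolov3";
    else if (cfgFile.find("yolov2-tiny") != std::string::npos)
        networkType = "yolov2-tiny";
    else if (cfgFile.find("yolov2") != std::string::npos)
        networkType = "yolov2";
    else
    {
        std::cerr << "Yolo type is not defined from config file name:"
                  << cfgFile << std::endl;
        return false;
    }
    return true;
}

int main()
{
    std::string type;
    getYoloType("prune_0.5_0.5_0.7.cfg", type); // prints the error above
    getYoloType("yolov3.cfg", type);            // type == "yolov3"
    return 0;
}
[/code]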
Thanks.

Hello,
Thanks a lot for your support. It works after I renamed the .cfg and .weights files to yolov3.

But there is another question: the maxpool layer diminishes the input dimensions (w, h) when the maxpool size is greater than 2.

for example:
1. [maxpool]
   stride=1
   size=5

2. input --> 12 x 19 x 19

3. output (diminished) --> 12 x 15 x 15
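The shrinkage is the standard pooling arithmetic when no padding is applied: out = (in - size) / stride + 1 = (19 - 5) / 1 + 1 = 15. Keeping 19 x 19 needs symmetric padding of (size - 1) / 2 = 2, i.e. 'SAME'-style behaviour. A minimal sketch of the arithmetic:

[code]
#include <cstdio>

// Pooling output size with explicit symmetric padding (floor division):
// out = (in + 2*pad - size) / stride + 1
static int poolOut(int in, int size, int stride, int pad)
{
    return (in + 2 * pad - size) / stride + 1;
}

int main()
{
    std::printf("no padding:   %d\n", poolOut(19, 5, 1, 0)); // 15 -- what the log shows
    std::printf("SAME padding: %d\n", poolOut(19, 5, 1, 2)); // 19 -- dimensions preserved
    return 0;
}
[/code]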

How do I solve this problem? I want to keep the output dimensions (w, h) unchanged.

[code]
(76)  conv-bn-leaky   864 x  19 x  19      11 x  19 x  19    10909193
(77)  conv-bn-leaky    11 x  19 x  19      33 x  19 x  19    10912592
(78)  conv-bn-leaky    33 x  19 x  19      12 x  19 x  19    10913036
(79)  maxpool          12 x  19 x  19      12 x  15 x  15    10913036
(80)  route                  -             12 x  19 x  19    10913036
(81)  maxpool          12 x  19 x  19      12 x  11 x  11    10913036
(82)  route                  -             12 x  19 x  19    10913036
(83)  maxpool          12 x  19 x  19      12 x   7 x   7    10913036
0:00:03.133510912  8139   0x558c324260 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): route_83: all concat input tensors must have the same dimensions except on the concatenation axis
[/code]

Hi,

It looks like you want the 'SAME' padding mode for the pooling layer, right?
That is, identical spatial size for both input and output.

Here is a function that may meet your requirement:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_pooling_layer.html#ad73004cbb5c1fac4d53759f2efb63b9c
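As a rough, hedged sketch of how that could be applied in a TensorRT 5.x network builder (the helper name addSameMaxpool is assumed, not the exact objectDetector_Yolo code): either set the padding mode to SAME, or set an explicit symmetric padding of (size - 1) / 2.

[code]
#include "NvInfer.h"

// Sketch only (assumed helper name): add a stride-1 max pooling layer that
// keeps the spatial dimensions, given a TensorRT 5.x INetworkDefinition.
nvinfer1::ILayer* addSameMaxpool(nvinfer1::INetworkDefinition* network,
                                 nvinfer1::ITensor* input,
                                 int size, int stride)
{
    nvinfer1::IPoolingLayer* pool = network->addPooling(
        *input, nvinfer1::PoolingType::kMAX, nvinfer1::DimsHW{size, size});
    pool->setStride(nvinfer1::DimsHW{stride, stride});

    // Option 1: let TensorRT compute 'SAME' padding automatically.
    pool->setPaddingMode(nvinfer1::PaddingMode::kSAME_UPPER);

    // Option 2 (equivalent here for odd window sizes): explicit symmetric padding.
    // pool->setPadding(nvinfer1::DimsHW{(size - 1) / 2, (size - 1) / 2});

    return pool;
}
[/code]

With size=5 and stride=1 this keeps a 19 x 19 input at 19 x 19, so the later route layers can concatenate branches with matching dimensions.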

Thanks.