ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox()

cuda-11.3 + cuDNN-8.2
Quadro RTX 5000 dual GPU
Driver Version: 470.82.00
CUDA Version: 11.4
Ubuntu 18.04
python 3.6
Yolo_v4

Deepstream 6

I am running Deepstream_python_apps

I can run inference with the default model, but not with my TAO-built custom model. The same model runs successfully with deepstream_tao_apps:
/home/vaaan/Downloads/deepstream_tao_apps/configs/yolov4_tao

python /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py /home/vaaan/Desktop/test2.h264

ERROR:

python /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py /home/vaaan/Desktop/test2.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /home/vaaan/Desktop/test2.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.233854559 12065 0x3223d20 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.233968484 12065 0x3223d20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.1.0
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
WARNING: [TRT]: TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.1.0
0:04:24.541285358 12065 0x3223d20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/vaaan/Downloads/cuda11.3-trt8.0-20210820T231234Z-001/cuda11.3-trt8.0/export_0.1_prune/yolov4_resnet18_epoch_080.etlt_b1_gpu0_fp16.engine successfully
WARNING: [TRT]: TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.1.0
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x608x608
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200

ERROR: [TRT]: Cannot find binding of given name: conv2d_bbox
0:04:24.550448655 12065 0x3223d20 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 1]: Could not find output layer 'conv2d_bbox' in engine
ERROR: [TRT]: Cannot find binding of given name: conv2d_cov/Sigmoid
0:04:24.550482725 12065 0x3223d20 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 1]: Could not find output layer 'conv2d_cov/Sigmoid' in engine
0:04:24.552099090 12065 0x3223d20 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
0:04:24.802364229 12065 0x2326b70 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:04:24.802405754 12065 0x2326b70 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)

Here is my config file:

[property]
gpu-id=0

#net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

net-scale-factor=1

tlt-model-key=aGJuM2dxZG5345345345xc2h0ZXBqZGk6MzlkYjAxY2EtZWE2OC00NGRiLWI5ZmUtZWRlNDZjMTI4MjA5
model-engine-file=/home/vaaan/Downloads/deepstream_tao_apps/models/yolov4/yolov4_resnet18_epoch_080.etlt_b1_gpu0_fp16.engine
labelfile-path=/home/vaaan/Downloads/deepstream_tao_apps/configs/yolov4_tao/yolov4_labels.txt
int8-calib-file=/home/vaaan/Downloads/cuda11.3-trt8.0-20210820T231234Z-001/cuda11.3-trt8.0/export_0.1_prune/cal.bin
tlt-encoded-model=/home/vaaan/Downloads/cuda11.3-trt8.0-20210820T231234Z-001/cuda11.3-trt8.0/export_0.1_prune/yolov4_resnet18_epoch_080.etlt

force-implicit-batch-dim=1
batch-size=1
#network-mode=1
network-mode=2
num-detected-classes=12
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
parse-bbox-func-name=NvDsInferParseCustomResnet

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

I tried changing parse-bbox-func-name, but still no luck.
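A quick way to spot this kind of mismatch: the engine log above reports BatchedNMS* bindings, while the config requests the resnet10 layer names. A stdlib-only sanity check (the binding set is copied from the log above; this is just a sketch, not a DeepStream API):

```python
import configparser

# Bindings reported by TensorRT when the engine was deserialized (see log above)
engine_bindings = {"Input", "BatchedNMS", "BatchedNMS_1", "BatchedNMS_2", "BatchedNMS_3"}

# Relevant fragment of the nvinfer config
config_text = """
[property]
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
"""

cfg = configparser.ConfigParser()
cfg.read_string(config_text)
requested = set(cfg["property"]["output-blob-names"].split(";"))

missing = requested - engine_bindings
if missing:
    print(f"output-blob-names not present in engine: {sorted(missing)}")
# -> output-blob-names not present in engine: ['conv2d_bbox', 'conv2d_cov/Sigmoid']
```

If `missing` is non-empty, nvinfer's parser will not find the layers it expects, which is exactly the "Could not find output layer" warning above.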

Problem solved: I changed custom-lib-path to the library that the deepstream_tao_apps YOLOv4 config points to:

custom-lib-path=/home/vaaan/Downloads/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
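For later readers, the relevant [property] changes in one place. The custom-lib-path and parse-bbox-func-name lines are exactly the fix above; the output-blob-names value is my inference from the engine bindings in the log (check the deepstream_tao_apps yolov4 sample config for the authoritative value):

```ini
# parser library and function from deepstream_tao_apps (as above)
custom-lib-path=/home/vaaan/Downloads/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
# assumed from the engine log: this engine exposes BatchedNMS outputs,
# not the resnet10 conv2d_bbox / conv2d_cov names
output-blob-names=BatchedNMS;BatchedNMS_1;BatchedNMS_2;BatchedNMS_3
```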

Thanks


Glad to know issue resolved.

I am trying to build libnvds_infercustomparser_tao.so in the DeepStream 6.0 samples container, but it seems unable to locate the TensorRT headers. I think that is because TensorRT is not installed there.

In file included from nvdsinfer_custombboxparser_tao.cpp:25:0:
/opt/nvidia/deepstream/deepstream-6.0/sources/includes/nvdsinfer_custom_impl.h:126:10: fatal error: NvCaffeParser.h: No such file or directory
 #include "NvCaffeParser.h"
          ^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:49: recipe for target 'libnvds_infercustomparser_tao.so' failed
make[1]: *** [libnvds_infercustomparser_tao.so] Error 1
make[1]: Leaving directory '/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps/post_processor'
Makefile:24: recipe for target 'all' failed
make: *** [all] Error 2

So I tried to install TensorRT inside the DeepStream container, but that failed as well.

I was able to build it in the DeepStream 6.0 devel container.
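The missing NvCaffeParser.h ships with the TensorRT development packages, which the runtime/samples images do not include (the devel image does). A quick check before running make (the package name libnvparsers-dev is my assumption for x86 Ubuntu; adjust for your platform):

```shell
# Check whether the TensorRT development headers are visible to the compiler.
hdr=$(find /usr/include /usr/local -name NvCaffeParser.h 2>/dev/null | head -n1)
if [ -z "$hdr" ]; then
  echo "NvCaffeParser.h not found: build inside a devel image or install the TensorRT dev packages (e.g. libnvparsers-dev)"
else
  echo "found: $hdr"
fi
```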
