LPRnet: NvBufSurfTransform failed with error -3 while converting buffer

• Hardware Platform (Jetson NX)
• DeepStream Version 5.1
• JetPack Version (4.5.1)
• TensorRT Version 7.1.3

Hello,

I am using the deepstream-test5 app for car license plate recognition.
After the program runs for a while, I get the following error:

0:02:38.988309784  3510   0x55ac7daca0 WARN                 nvinfer gstnvinfer.cpp:1277:convert_batch_and_push_to_input_thread:<secondary_gie_1> error: NvBufSurfTransform failed with error -3 while converting buffer
0:02:38.994126902  3510   0x55ac7daca0 WARN                 nvinfer gstnvinfer.cpp:1997:gst_nvinfer_output_loop:<secondary_gie_0> error: Internal data stream error.
0:02:38.994174614  3510   0x55ac7daca0 WARN                 nvinfer gstnvinfer.cpp:1997:gst_nvinfer_output_loop:<secondary_gie_0> error: streaming stopped, reason error (-5)
0:02:39.038605066  3510   0x55ac7daca0 WARN                 nvinfer gstnvinfer.cpp:1277:convert_batch_and_push_to_input_thread:<secondary_gie_1> error: NvBufSurfTransform failed with error -3 while converting buffer
ERROR from secondary_gie_1: NvBufSurfTransform failed with error -3 while converting buffer
Debug info: gstnvinfer.cpp(1277): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_1
Tue Oct  5 13:14:31 2021
**PERF:  18.08 (16.61)	18.73 (16.89)	
Quitting
ERROR from secondary_gie_0: Internal data stream error.
Debug info: gstnvinfer.cpp(1997): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
streaming stopped, reason error (-5)
ERROR from secondary_gie_1: NvBufSurfTransform failed with error -3 while converting buffer
Debug info: gstnvinfer.cpp(1277): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_1

This is my pgie config txt:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=model_output_labels.txt
model-engine-file=../../models/yolov4_Vehicle_640_384/yolov4_resnet18_pruned_b1_m64_int8.engine
int8-calib-file=../../models/yolov4_Vehicle_640_384/cal.bin
tlt-encoded-model=../../models/yolov4_Vehicle_640_384/yolov4_cspdarknet19_epoch_420.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;384;640
#uff-input-dims=3;384;640;0
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=7
interval=0
gie-unique-id=1
#is-classifier=0
process-mode=1
cluster-mode=3
output-blob-names=BatchedNMS
network-type=0
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../parser/libnvds_infercustomparser_tlt.so
output-tensor-meta=0

[class-attrs-all]
threshold=0.61
#pre-cluster-threshold=0.8
#group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
#eps=0.7
minBoxes=0
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=28
detected-min-h=28
detected-max-w=1500
detected-max-h=800

[class-attrs-4]
threshold=100

[class-attrs-3]
threshold=100

This is my sgie0 config txt:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=model_output_labels.txt
model-engine-file=../../models/yolov4_licensePlate_416/yolov4_resnet18_pruned_b1_m64_int8.engine
int8-calib-file=../../models/yolov4_licensePlate_416/cal.bin
tlt-encoded-model=../../models/yolov4_licensePlate_416/yolov4_resnet18_epoch_200.etlt
tlt-model-key=nvidia_tlt
uff-input-dims=3;416;416;0
uff-input-blob-name=input_1
#infer-dims=3;461;416
maintain-aspect-ratio=1
uff-input-order=0
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=5
interval=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;1;2;5
is-classifier=0
#network-type=0
#0: CPU
#1: GPU (dGPU only)
#2: Hardware (Jetson only)
process-mode=2
#no cluster
cluster-mode=3
input-object-min-height=28
input-object-min-width=28
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedLPNMSTLT
custom-lib-path=../parser/libnvds_infercustomparser_tlt.so


[class-attrs-all]
threshold=0.41
#pre-cluster-threshold=0.8
#group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
#eps=0.7
minBoxes=0
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=32
detected-min-h=30
detected-max-w=180
detected-max-h=75

This is my sgie1 config txt:

[property]
gpu-id=0
model-engine-file=../../../../models/lprnet/lpr_us_onnx_b64.engine
labelfile-path=us_lp_characters.txt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
#num-detected-classes=3
gie-unique-id=3
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
#custom-lib-path=nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
custom-lib-path=../liblpr_parser.so
process-mode=2
operate-on-gie-id=2
operate-on-class-ids=0;1;2;3;4
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0
input-object-min-width=30
input-object-min-height=28
classifier-threshold=0.51
classifier-async-mode=0
interval=0

[class-attrs-all]
threshold=0.51
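
For reference, the `net-scale-factor` in the sgie1 config above is 1/255: together with the default zero offsets, it rescales 8-bit pixel values into [0, 1] before they reach LPRNet. A quick check of that arithmetic:

```python
# net-scale-factor from the sgie1 config: this is 1/255, which maps
# 8-bit pixel intensities (0..255) into the [0.0, 1.0] range.
NET_SCALE_FACTOR = 0.00392156862745098

assert abs(NET_SCALE_FACTOR - 1 / 255) < 1e-12

# A full-intensity pixel maps to ~1.0, a black pixel to 0.0.
print(round(255 * NET_SCALE_FACTOR, 6))  # -> 1.0
```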

When I disable sgie1 (LPRNet), the error disappears. What am I missing that causes this error?
I found this solution, but it didn't work. Can YOLO work correctly as a secondary detector in DeepStream 5.1?
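
One cause of this kind of NvBufSurfTransform failure that comes up on these forums is a degenerate crop: when nvinfer preprocesses objects for a secondary GIE with `maintain-aspect-ratio=1`, a very thin or out-of-bounds box from the upstream detector can scale/round down to a zero-sized rectangle, which the transform rejects. This is an illustrative sketch only (plain Python, not DeepStream source; the 96x48 network size is an assumption for an LPRNet-style input):

```python
# Illustrative sketch (NOT DeepStream source code): how aspect-ratio
# preserving preprocessing can turn a thin detection into a zero-sized
# destination rectangle, i.e. an invalid crop/scale request.

def scaled_dims(obj_w, obj_h, net_w=96, net_h=48):
    """Destination size for an object scaled into the network input
    while keeping its aspect ratio (net_w x net_h is an assumed size)."""
    scale = min(net_w / obj_w, net_h / obj_h)
    return int(obj_w * scale), int(obj_h * scale)

# A normal license-plate crop scales to a valid rectangle:
print(scaled_dims(120, 40))  # -> (96, 32)

# A 300x2 sliver (e.g. a detection clipped at the edge of its parent
# crop) rounds down to zero height -- an invalid transform request:
print(scaled_dims(300, 2))   # -> (96, 0)
```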

Hi,

You can define a detector in the secondary inference engine.
Below is an example for your reference:

If the sample doesn’t help, would you mind sharing a complete sample to reproduce this issue with us?

Thanks.

I changed the sgie detector (YOLOv4) to detectnet_v2, and the error disappeared.
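
For anyone who wants to keep YOLOv4 as the secondary detector: a workaround sometimes suggested is to clamp the boxes emitted by the custom bbox parser so they never fall outside the frame or degenerate before the next GIE tries to crop them. Below is a minimal, self-contained sketch of that clamping logic in Python (a hypothetical helper, not part of `libnvds_infercustomparser_tlt.so`):

```python
# Hypothetical helper (not part of the TLT parser library): clamp a
# detected box to its parent surface and drop it if it degenerates.

def clamp_box(left, top, width, height, surf_w, surf_h):
    """Return a box clipped to the surface, or None if it collapses."""
    right = min(left + width, surf_w)
    bottom = min(top + height, surf_h)
    left = max(left, 0)
    top = max(top, 0)
    if right - left < 1 or bottom - top < 1:
        return None  # degenerate box: drop it instead of sending downstream
    return left, top, right - left, bottom - top

print(clamp_box(-5, 10, 60, 30, 416, 416))  # partially outside -> clipped
print(clamp_box(410, 10, 60, 0, 416, 416))  # zero height -> None
```

The same check would be written in C inside the actual parser callback, skipping any object whose clamped width or height is below one pixel.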
