• Hardware Platform (Jetson / GPU)
Jetson: NVIDIA Jetson Xavier NX (8 GB)
• DeepStream Version
DeepStream 6.3
• JetPack Version (valid for Jetson only)
JetPack 5.1.3-b29
• TensorRT Version
8.5.2.2
• Issue Type (questions, new requirements, bugs)
bugs
• How to reproduce the issue? (For bugs: include which sample app you are using, the configuration file contents, the command line used, and other details for reproducing.)
I am using the deepstream-lpr-app sample app to recognize license plates in the attached video.
Soon after launching the app with
sudo ./deepstream-lpr-app 1 3 0 infer mnhtn.mp4 output
NvBufSurfTransform fails with error -3 while converting a buffer:
Frame Number = 172 Vehicle Count = 7 Person Count = 0 License Plate Count = 4
0:00:24.799289431 7406 0xaaaac24fd240 WARN nvinfer gstnvinfer.cpp:1463:convert_batch_and_push_to_input_thread:<secondary-infer-engine2> error: NvBufSurfTransform failed with error -3 while converting buffer
ERROR from element secondary-infer-engine2: NvBufSurfTransform failed with error -3 while converting buffer
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1463): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstNvInfer:secondary-infer-engine2
Returned, stopping playback
0:00:24.821741286 7406 0xaaaac24fd2a0 WARN nvinfer gstnvinfer.cpp:2397:gst_nvinfer_output_loop:<secondary-infer-engine1> error: Internal data stream error.
0:00:24.821878571 7406 0xaaaac24fd2a0 WARN nvinfer gstnvinfer.cpp:2397:gst_nvinfer_output_loop:<secondary-infer-engine1> error: streaming stopped, reason error (-5)
[NvMultiObjectTracker] De-initialized
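For anyone unfamiliar with the sample's positional arguments, this is how I read my launch command. The argument meanings are taken from the usage message deepstream-lpr-app prints, as I remember it, so treat them as assumptions and verify against the sample's README:

```shell
# Decode the positional arguments of my launch command
# (meanings assumed from the sample's usage string; please double-check):
set -- 1 3 0 infer mnhtn.mp4 output
case $1 in 1) model="US plate model";; 2) model="Chinese plate model";; esac
case $2 in 1) out="H.264 file";; 2) out="fakesink";; 3) out="display";; esac
case $3 in 0) roi="ROI disabled";; 1) roi="ROI enabled";; esac
echo "model=$model, output=$out, $roi, backend=$4, input=$5, output-name=$6"
```

So my run uses the US models, on-screen display output, ROI disabled, and the nvinfer (non-Triton) backend.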
My configs are:
- lpd_yolov4-tiny_us.txt (secondary detector, detects license plates):
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=0
labelfile-path=../models/tao_pretrained_models/yolov4-tiny/usa_lpd_label.txt
model-engine-file=../models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
int8-calib-file=../models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_cal.bin
tlt-encoded-model=../models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;480;640
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
interval=4
gie-unique-id=2
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
#if scaling-compute-hw = VIC, input-object-min-height needs to be even and greater than or equal to (model height)/16
input-object-min-height=60
#if scaling-compute-hw = VIC, input-object-min-width needs to be even and greater than or equal to (model width)/16
input-object-min-width=100
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
layer-device-precision=cls/mul:fp32:gpu;box/mul_6:fp32:gpu;box/add:fp32:gpu;box/mul_4:fp32:gpu;box/add_1:fp32:gpu;cls/Reshape_reshape:fp32:gpu;box/Reshape_reshape:fp32:gpu;encoded_detections:fp32:gpu;bg_leaky_conv1024_lrelu:fp32:gpu;sm_bbox_processor/concat_concat:fp32:gpu;sm_bbox_processor/sub:fp32:gpu;sm_bbox_processor/Exp:fp32:gpu;yolo_conv1_4_lrelu:fp32:gpu;yolo_conv1_3_1_lrelu:fp32:gpu;md_leaky_conv512_lrelu:fp32:gpu;sm_bbox_processor/Reshape_reshape:fp32:gpu;conv_sm_object:fp32:gpu;yolo_conv5_1_lrelu:fp32:gpu;concatenate_6:fp32:gpu;yolo_conv3_1_lrelu:fp32:gpu;concatenate_5:fp32:gpu;yolo_neck_1_lrelu:fp32:gpu
scaling-compute-hw=1
[class-attrs-all]
pre-cluster-threshold=0.3
detected-min-w=32
detected-min-h=16
roi-top-offset=0
roi-bottom-offset=0
#group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
#minBoxes=2
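Since the comments in this config tie input-object-min-height/-width to VIC scaling constraints, here is a quick arithmetic check (plain shell, values copied from lpd_yolov4-tiny_us.txt above; infer-dims=3;480;640 gives the model height/width) that my minimums satisfy "even and >= model dimension / 16":

```shell
# Values copied from my lpd_yolov4-tiny_us.txt
model_h=480; model_w=640   # from infer-dims=3;480;640
min_h=60;  min_w=100       # input-object-min-height / input-object-min-width
# VIC constraint per the config comments: even, and >= model dimension / 16
h_ok=$(( min_h % 2 == 0 && min_h >= model_h / 16 ))
w_ok=$(( min_w % 2 == 0 && min_w >= model_w / 16 ))
echo "h_ok=$h_ok w_ok=$w_ok (model_h/16=$((model_h / 16)), model_w/16=$((model_w / 16)))"
```

Both checks pass (60 >= 30 and 100 >= 40, both even), so as far as I can tell these minimums are not what triggers the error.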
- lpr_config_sgie_us.txt (secondary classifier, recognizes the characters on license plates):
[property]
gpu-id=0
model-engine-file=../models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
labelfile-path=../models/LP/LPR/labels_us.txt
tlt-encoded-model=../models/LP/LPR/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
gie-unique-id=3
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=../nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0
scaling-compute-hw=1
input-object-min-width=32
input-object-min-height=16
[class-attrs-all]
threshold=0.5
- I’ve also changed the sample source code (deepstream_lpr_app.c) so the muxer and tiler match the source video’s resolution (1920x1080 instead of 1280x720):
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080
...
g_object_set (G_OBJECT (nvtile), "rows", tiler_rows, "columns",
tiler_columns, "width", 1920, "height", 1080, NULL);
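As a sanity check on that change, I verified that 1920x1080 keeps the same 16:9 aspect ratio as the source video, so frames are not distorted before the secondary engines crop the plate regions (plain shell arithmetic, no DeepStream needed):

```shell
# Reduce 1920:1080 by the GCD to confirm the aspect ratio is 16:9
w=1920; h=1080
a=$w; b=$h
while [ "$b" -ne 0 ]; do t=$((a % b)); a=$b; b=$t; done
g=$a   # greatest common divisor of w and h
echo "aspect ratio: $((w / g)):$((h / g))"
```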