• GPU
• DeepStream 7.0
• CUDA 12.6
• detection bug
• Trying a DeepStream pipeline with the LPDNet pruned_v2.1 model
Using yolov4_tiny_usa_deployable from pruned_v2.1 (LPDNet | NVIDIA NGC) in my DeepStream pipeline, it detects absolutely nothing: 0 license plates.
Here is my pipeline:
pipeline:
- v4l2src:
device: /dev/video0
- capsfilter:
caps: "image/jpeg, width=1920, height=1080, framerate=30/1"
- jpegdec: {}
- videoconvert: {}
- nvvideoconvert: {}
- capsfilter:
caps: "video/x-raw(memory:NVMM), format=RGBA, width=1920, height=1080"
- mux.sink_0:
nvstreammux:
name: mux
batch-size: 1
width: 1920
height: 1080
batched-push-timeout: 4000000
live-source: 1
num-surfaces-per-frame: 1
sync-inputs: 0
max-latency: 0
- nvinfer:
config-file-path: ../infer_cfg/config_infer_secondary_lpdnet_YOLO.txt
- nvtracker:
tracker-width: 640
tracker-height: 384
gpu-id: 0
ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file: ../infer_cfg/config_tracker_NvDCF_perf.yml
- nvdsanalytics:
name: "analytics"
config-file: ../infer_cfg/analytics.txt
- nvvideoconvert: {}
- nvdsosd:
name: onscreendisplay
- fpsdisplaysink:
name: fps-display
video-sink: nveglglessink
text-overlay: false
sync: false
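For reference, here is a minimal runnable sketch of the same pipeline built with Gst.parse_launch from the GStreamer Python bindings. Element properties and config paths are copied from the YAML above; the script itself and the added name=sgie on the nvinfer element (used for probing further down) are only for illustration.

#!/usr/bin/env python3
# Minimal sketch of the pipeline above via Gst.parse_launch.
# Assumes the relative config paths resolve from the working directory.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

desc = (
    "v4l2src device=/dev/video0 ! "
    "image/jpeg,width=1920,height=1080,framerate=30/1 ! "
    "jpegdec ! videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080 ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 "
    "batched-push-timeout=4000000 live-source=1 num-surfaces-per-frame=1 "
    "sync-inputs=0 max-latency=0 ! "
    "nvinfer name=sgie "
    "config-file-path=../infer_cfg/config_infer_secondary_lpdnet_YOLO.txt ! "
    "nvtracker tracker-width=640 tracker-height=384 gpu-id=0 "
    "ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so "
    "ll-config-file=../infer_cfg/config_tracker_NvDCF_perf.yml ! "
    "nvdsanalytics name=analytics config-file=../infer_cfg/analytics.txt ! "
    "nvvideoconvert ! nvdsosd name=onscreendisplay ! "
    "fpsdisplaysink name=fps-display video-sink=nveglglessink "
    "text-overlay=false sync=false"
)

pipeline = Gst.parse_launch(desc)
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)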
Here is my config file for the model:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=../../models/LP/LPD/usa_lpd_label.txt
tlt-encoded-model=../../models/LP/LPD/yolov4_tiny_usa_deployable.etlt
int8-calib-file=../../models/LP/LPD/yolov4_tiny_usa_cal.bin
tlt-model-key=nvidia_tlt
# infer-dims replaces the older uff-input-dims key
infer-dims=3;480;640
batch-size=16
# network-mode: 0=FP32, 1=INT8, 2=FP16 (INT8 requires hardware support)
network-mode=1
num-detected-classes=1
# process-mode: 2 = secondary inference (operates on detected objects)
process-mode=2
interval=0
gie-unique-id=2
# network-type: 0 = detector
network-type=0
operate-on-gie-id=1
operate-on-class-ids=0
# cluster-mode: 3 = DBSCAN + NMS hybrid clustering
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
#enable-dla=1 # Uncomment if using Deep Learning Accelerator (DLA)
[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
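To check whether nvinfer attaches any object metadata at all with this config, I can hang a buffer probe on its src pad and count NvDsObjectMeta per frame. A minimal sketch, assuming the DeepStream Python bindings (pyds) are installed; the element name "sgie" matches the placeholder used in the Python pipeline sketch above, and the probe function name is my own.

import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def count_objects_probe(pad, info, user_data):
    # Print the number of NvDsObjectMeta attached to each frame in the batch.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print(f"frame {frame_meta.frame_num}: {frame_meta.num_obj_meta} objects")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach it to the nvinfer src pad (assumes the element was created with name=sgie):
# sgie = pipeline.get_by_name("sgie")
# sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, count_objects_probe, None)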
Here is my output/warnings at launch:
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:02.101643023 419733 0x57888284f920 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: builtin_op_importers.cpp:5221: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 125) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 207) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 208) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 210) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 217) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 218) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 224) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 225) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 289) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 290) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 293) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 294) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 298) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 299) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 555) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 556) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 559) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 560) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor BatchedNMS, expect fall back to non-int8 implementation for any layer consuming or producing given tensor