Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Hello NVIDIA, I want to run deepstream-lpr-app, but it doesn't work. Please help!
./deepstream-lpr-app 2 2 0 c2.mp4 o.264
qtdemux pad video/x-h264
ERROR from element stream-muxer: NvStreamMux does not suppport raw buffers. Use nvvideoconvert before NvStreamMux to convert to NVMM buffers
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(954): gst_nvstreammux_sink_event (): /GstPipeline:pipeline/GstNvStreamMux:stream-muxer
Returned, stopping playback
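For context, this error means the buffers reaching nvstreammux are in system (raw) memory rather than NVMM device memory. A minimal standalone pipeline that follows the error's advice, placing nvvideoconvert before nvstreammux, could look like the sketch below; the caps, resolution, and fakesink are illustrative assumptions, not the app's actual pipeline:

# Decode the mp4, convert to NVMM buffers, then batch with nvstreammux
gst-launch-1.0 filesrc location=c2.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
    nvstreammux name=mux batch-size=1 width=1280 height=720 ! fakesink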
How can we reproduce your error?
I ran it following these steps:
1. git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
2. cd deepstream_lpr_app/
3. ./download_ch.sh
4. Download tlt-converter from https://developer.nvidia.com/cuda102-trt71-jp44
5. ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/ch_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_ch_onnx_b16.engine (an optional engine sanity check is shown after these steps)
6. make
7. cd deepstream-lpr-app
8. cp dict_ch.txt dict.txt
9. ./deepstream-lpr-app 2 2 0 c2.mp4 o.264
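As a sanity check on the engine produced in step 5, trtexec (shipped with TensorRT on JetPack, typically at /usr/src/tensorrt/bin/trtexec) can confirm that the engine deserializes and runs. This is a hedged sketch; the --shapes value is an assumption matching the optimization profile passed to tlt-converter above:

# Load the serialized engine and run a timed inference pass
/usr/src/tensorrt/bin/trtexec --loadEngine=models/LP/LPR/lpr_ch_onnx_b16.engine \
    --shapes=image_input:4x3x48x96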
This is the test output:
Request sink_0 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: 2
Opening in BLOCKING MODE
0:00:06.521263721 28473 0x559fc21520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 3]: deserialized trt engine from :/home/nx/Downloads/deepstream_lpr_app/models/LP/LPR/lpr_ch_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96 min: 1x3x48x96 opt: 4x3x48x96 Max: 16x3x48x96
1 OUTPUT kINT32 tf_op_layer_ArgMax 24 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT tf_op_layer_Max 24 min: 0 opt: 0 Max: 0
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:06.521730115 28473 0x559fc21520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:06.521818818 28473 0x559fc21520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:06.521866178 28473 0x559fc21520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 3]: Use deserialized engine model: /home/nx/Downloads/deepstream_lpr_app/models/LP/LPR/lpr_ch_onnx_b16.engine
0:00:06.533728240 28473 0x559fc21520 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_ch.txt sucessfully
0:00:06.534357768 28473 0x559fc21520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 2]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
WARNING: [TRT]: Missing dynamic range for tensor output_bbox/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing dynamic range for tensor output_cov/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing dynamic range for tensor output_cov/Sigmoid, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: conv1/convolution + activation_1/Relu6, block_1a_conv_1/convolution + block_1a_relu_1/Relu6, block_1a_conv_2/convolution, block_1a_conv_shortcut/convolution + add_1/add + block_1a_relu/Relu6, block_1b_conv_1/convolution + block_1b_relu_1/Relu6, block_1b_conv_2/convolution + add_2/add + block_1b_relu/Relu6, block_2a_conv_1/convolution + block_2a_relu_1/Relu6, block_2a_conv_2/convolution, block_2a_conv_shortcut/convolution + add_3/add + block_2a_relu/Relu6, block_2b_conv_1/convolution + block_2b_relu_1/Relu6, block_2b_conv_2/convolution + add_4/add + block_2b_relu/Relu6, block_3a_conv_1/convolution + block_3a_relu_1/Relu6, block_3a_conv_2/convolution, block_3a_conv_shortcut/convolution + add_5/add + block_3a_relu/Relu6, block_3b_conv_1/convolution + block_3b_relu_1/Relu6, block_3b_conv_2/convolution + add_6/add + block_3b_relu/Relu6, block_4a_conv_1/convolution + block_4a_relu_1/Relu6, block_4a_conv_2/convolution, block_4a_conv_shortcut/convolution + add_7/add + block_4a_relu/Relu6, block_4b_conv_1/convolution + block_4b_relu_1/Relu6, block_4b_conv_2/convolution + add_8/add + block_4b_relu/Relu6, output_cov/convolution, output_cov/Sigmoid, output_bbox/convolution,
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:40.575000988 28473 0x559fc21520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 2]: serialize cuda engine to file: /home/nx/Downloads/deepstream_lpr_app/models/LP/LPD/ccpd_pruned.etlt_b16_gpu0_int8.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x480x640
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40
2 OUTPUT kFLOAT output_cov/Sigmoid 1x30x40
0:00:40.642005735 28473 0x559fc21520 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_ccpd_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
!! [WARNING][NvDCF] Unknown param found: minMatchingScore4Motion
!! [WARNING][NvDCF] Unknown param found: matchingScoreWeight4Motion
[NvDCF] Initialized
0:00:42.312315390 28473 0x559fc21520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/nx/Downloads/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60
0:00:42.312563003 28473 0x559fc21520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/nx/Downloads/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
0:00:42.318722384 28473 0x559fc21520 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running...
qtdemux pad video/x-h264
ERROR from element stream-muxer: NvStreamMux does not suppport raw buffers. Use nvvideoconvert before NvStreamMux to convert to NVMM buffers
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(954): gst_nvstreammux_sink_event (): /GstPipeline:pipeline/GstNvStreamMux:stream-muxer
Returned, stopping playback
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[NvDCF] De-initialized
Average fps 0.000233
Totally 0 plates are inferred
Deleting pipeline
I don't know if my steps are correct.
Your steps are correct. I can run the app with the steps you listed.
Both deepstream-test1 and deepstream-test2 run successfully; I don't know why this app fails.
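For reference, those test apps take an H.264 elementary stream as the input argument; the command below uses the sample clip shipped with DeepStream 5.0 (path assumed from a default install):

./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264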
Now I will test DeepStream 5.1 and run the LPR app again. Thanks.
It works now. My video is 1080p, while the test used a 720p video. Thanks!
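For anyone hitting the same symptom, one way to produce a 720p test clip on Jetson is to rescale with the hardware-accelerated GStreamer elements. This is a sketch under assumed caps and file names, not the app's own pipeline:

# Decode, scale to 1280x720 in NVMM memory, re-encode, and remux to mp4
gst-launch-1.0 filesrc location=c2.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! \
    nvv4l2h264enc ! h264parse ! qtmux ! filesink location=c2_720p.mp4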
On Jetson Nano with JetPack 4.5, the change that made things work was using the JetPack 4.4 version of tlt-converter: https://developer.nvidia.com/cuda102-trt71-jp44