Facing an issue while using the DeepStream LPR app

• Hardware Platform : GPU (Tesla T4)
• DeepStream Version: 6.0
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 470

The DeepStream app fails while building the engine file for sgie2 (us_lprnet_baseline18_deployable.etlt):

0:00:01.256299095 30248 0x3a1bca0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 3]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed
0:00:01.256360773 30248 0x3a1bca0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 3]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed, try rebuild
0:00:01.256391192 30248 0x3a1bca0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
python3: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
Aborted (core dumped)
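The crash happens inside TensorRT's in-process engine build, after deserialization of the cached engine fails. One common way to sidestep the in-process build is to generate the engine offline with the TAO `tao-converter` tool and let nvinfer deserialize the pre-built file. A sketch, assuming the `tao-converter` binary is on the path, the default LPRNet model key `nvidia_tlt`, and the engine path/batch dims from the log above (verify the key and `-p` dims against the model card before use):

```shell
# Hedged workaround sketch: build the LPRNet FP16 engine offline so nvinfer
# can load it instead of building in-process. Paths taken from the log above;
# the key (-k) and input dims (-p) are assumptions from the LPRNet model card.
MODEL_DIR=/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_lpr_app/models/LP/LPR

tao-converter -k nvidia_tlt \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    -t fp16 \
    -e "$MODEL_DIR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine" \
    "$MODEL_DIR/us_lprnet_baseline18_deployable.etlt"
```

The `-e` path must match the `model-engine-file` in the sgie2 nvinfer config so the plugin finds and deserializes it instead of rebuilding.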

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and a description of the function.)


I cannot reproduce this error on my T4 with DS 6.0. Please check your setup.

Please make sure deepstream-test1 can work before you start any new samples.
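The suggestion above can be checked quickly. A sketch, assuming the default DeepStream 6.0 install layout (you may need to set `CUDA_VER`, e.g. `CUDA_VER=11.4`, for the Makefile):

```shell
# Sanity-check sketch: build and run the stock deepstream-test1 sample
# against a bundled H.264 stream before debugging the LPR app.
# Assumes the default /opt/nvidia/deepstream/deepstream-6.0 install path.
cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test1
make
./deepstream-test1-app ../../../../samples/streams/sample_720p.h264
```

If this sample also fails to build an engine, the problem is in the TensorRT/driver installation rather than the LPR app itself.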


@Fiona.Chen
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app/issues/15#issue-1084437420

The issue cannot be reproduced. Please check your environment.

Hey, I forgot to mention: we are using the Python version of the LPR app.

The issue can be reproduced with this Python app:

python3 deepstream_lpr_app.py 1 2 0 file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4 output.mp4
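For readers unfamiliar with the sample's positional arguments, the sketch below shows how `1 2 0 <uri> <output>` is typically interpreted, based on the C sample's usage string; the names and meanings here are illustrative assumptions, not the app's actual code:

```python
# Illustrative sketch (assumption, not the sample's real parser): mapping the
# LPR sample's positional args to readable options, per the C sample's usage.
MODELS = {"1": "US plate model", "2": "Chinese plate model"}
SINKS = {"1": "h264 file", "2": "fakesink", "3": "display"}

def parse_lpr_args(argv):
    """Map the LPR sample's positional args to named options."""
    model = MODELS[argv[1]]        # which LPRNet model to load
    sink = SINKS[argv[2]]          # output sink type
    roi_enabled = argv[3] == "1"   # ROI on/off
    return model, sink, roi_enabled, argv[4], argv[5]

if __name__ == "__main__":
    model, sink, roi, uri, out = parse_lpr_args(
        ["deepstream_lpr_app.py", "1", "2", "0",
         "file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4",
         "output.mp4"])
    print(model, sink, roi)  # US plate model fakesink False
```

Reading `1 2 0` this way (US model, fakesink, ROI disabled) is consistent with the `Creating FakeSink` line in the log below.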

root@c79ec6aed68e:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-lpr-python-version# python3 deepstream_lpr_app.py 1 2 0 file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4 output.mp4
['deepstream_lpr_app.py', '1', '2', '0', 'file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4', 'output.mp4']
1
Creating Pipeline
Creating streamux
Creating source_bin 0
file://file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4
*********************
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvdsanalytics
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating FakeSink

Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
4 :  file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4
Starting pipeline 

0:00:00.802258261 17048      0x362f8a0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
python3: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
Aborted (core dumped)

So you need to debug your own code.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.