Hi, currently I am working on a license plate recognition app (GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream). License plate detection (sgie) is working fine, but I am not able to get the label (OCR text) from the LPR model, since l_class is always None from l_class = obj_meta.classifier_meta_list. Can anyone please help me with this?
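For reference, here is a minimal sketch of how reading the OCR label usually looks with the pyds bindings, following the standard deepstream_python_apps pattern. It assumes a pad probe attached to a downstream element (for example the nvdsosd sink pad) and that the LPR sgie attaches its result as classifier metadata; the probe function name is a placeholder.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_probe(pad, info, u_data):
    # Standard pyds pattern: walk frames -> objects -> classifier meta -> labels.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_class = obj_meta.classifier_meta_list  # None if no classifier ran on this object
            while l_class is not None:
                class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
                l_label = class_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    print("plate text:", label_info.result_label)
                    l_label = l_label.next
                l_class = l_class.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

If l_class is still None inside a loop like this, the label is most likely never being attached at all, which points at the sgie configuration (for example process-mode, operate-on-gie-id, or the custom classifier parser) rather than at the probe itself.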
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
Hi, I am using the GPU platform with DS 5.1. I have converted the LPR model using tlt-converter (./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine). Once I run the LPR app, I face the following errors:
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:04.895800298 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:04.895898249 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_cov/Sigmoid' in engine
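As an aside, output_bbox/BiasAdd and output_cov/Sigmoid are the output layer names of the DetectNet_v2-style detector, so this warning suggests a detector-style nvinfer config is being checked against the LPR classifier engine. One way to confirm which bindings the generated engine actually exposes is a short TensorRT script like the sketch below (untested; the engine path is taken from the tlt-converter command above, and the binding API shown is the TensorRT 7/8-era Python API).

```python
import tensorrt as trt

# Diagnostic sketch: print the binding names the serialized engine exposes,
# to compare against the output-blob-names expected by the nvinfer config.
ENGINE_PATH = "lpr_us_onnx_b16.engine"  # path from the tlt-converter command above

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(i, kind, engine.get_binding_name(i), engine.get_binding_shape(i))
```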
Hi, I am unable to find a solution for the above issue, so I have moved to DeepStream 6.0. I have followed the steps from the GitHub LPR project. Currently I am facing the following issue:
Starting pipeline
0:00:00.209260524 226 0x1a87b20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
python3: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
Aborted (core dumped)
Could you please help me?
Can you run it step by step based on the guide below?
GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream
Yeah, I have followed the same steps with DeepStream 6.0 and am still facing the same issue.
Is it possible to do this project in Python? I want to build something similar, but in Python.
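For what it's worth, the DeepStream Python bindings (deepstream_python_apps) make a Python port feasible. Below is a rough, untested sketch of the same three-stage pipeline built with gst-python; the input file and the three config file names are placeholders modeled on the deepstream_lpr_app layout, not verified paths.

```python
#!/usr/bin/env python3
# Rough sketch (untested) of the LPR pipeline rebuilt with gst-python.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=trafficcamnet_config.txt ! "   # car detector (pgie)
    "nvinfer config-file-path=lpd_config.txt ! "             # plate detector (sgie 1)
    "nvinfer config-file-path=lpr_config_sgie_us.txt ! "     # plate recognizer (sgie 2)
    "nvvideoconvert ! nvdsosd name=osd ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

Attaching the pad probe from the first snippet to the sink pad of the osd element would then print the recognized plate text.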
Can you submit a new topic for your question? Thanks.