To narrow this down, outside of a virtual environment, can you follow GitHub - NVIDIA-AI-IOT/deepstream_lpr_app (Sample app code for LPR deployment on DeepStream) to deploy your custom_lprnet.engine?