LPR - TensorRT engine build fails on DS 6.0

Deepstream version: 6.0
Platform: dGPU

According to the License Plate Recognition sample, using tao-converter should not be required on DS 6.0.

I’m trying to load us_lprnet_baseline18_deployable.etlt directly, but building the TensorRT engine fails with the following error:

INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<model_inference3> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
ERROR: [TRT]: 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[lstm_W...Max_reduce_min]}.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1119 Build engine failed from config file
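For context, my sgie config follows the sample’s lpr_config_sgie_us.txt. The relevant lines look roughly like this (paths are placeholders and I’ve trimmed it down, so treat it as a sketch of my setup rather than an exact copy):

[property]
gpu-id=0
# Load the .etlt directly; nvinfer should build the TensorRT engine on first run
tlt-encoded-model=../models/LP/LPR/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
labelfile-path=../models/LP/LPR/labels_us.txt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16
network-mode=2
gie-unique-id=3
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max
# 0=Detection, 1=Classifier
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=../nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so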

Is this model supported natively or not?

I can’t reproduce this issue with the commands below on DS 6.0 GA + dGPU:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
cd deepstream_lpr_app/
./download_us.sh
make
cd deepstream-lpr-app/
cp dict_us.txt dict.txt
./deepstream-lpr-app 1 2 0 /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 output.264
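For reference, per the repo README the positional arguments select the US vs. Chinese plate model (1/2), the output type (1: h264 file, 2: fakesink, 3: display), and ROI disable/enable (0/1), followed by one or more input mp4 files and the output file name.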
