As a follow-up to the long thread Tao toolkit observations - #63 by foreverneilyoung:
I was following the notebook ocrnet/ocrnet-vit.ipynb in order to train OCRNet for German number plate recognition.
I first ran the notebook “as is” to see what it produces. In the end I got this:
I was using this ONNX model as a replacement for my original LPR ONNX (trained this morning from lprnet/lprnet.ipynb), with the following configuration:
[property]
gpu-id=0
# This model works. Trained from LPRNet
#onnx-file=models/LP/LPR/lprnet_epoch-024.onnx
onnx-file=models/LP/LPR/best_accuracy.onnx
labelfile-path=models/LP/LPR/labels_us.txt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
gie-unique-id=3
# This line is causing problems
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
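# (These blob names come from a DetectNet_v2-style detector model
# (bbox/coverage heads); a classifier/OCR ONNX will not contain them,
# so check the model's actual output names, e.g. in Netron.)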
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=nvinfer/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0
[class-attrs-all]
threshold=0.5
But all I got was this:
0:00:02.233559654 24474 0x7719e4009c70 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<sgie2-lpr> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
// ... and many more of these warnings
Hi @foreverneilyoung ,
As we synced in Tao toolkit observations - #61 by Morganh, you are using https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app/blob/master/deepstream-lpr-app/lpr_config_pgie.txt, but there is obviously a problem with output-blob-names.
I am moving this topic to the DeepStream forum for further investigation.
@Fiona.Chen I already consulted Netron; it produced an image that was too big for PNG export, and even the SVG export was too big to attach here. The top of it looks like this:
DeepStream does not care about the details of the network; only the input and output layers are meaningful to DeepStream. Please click the top layer of the network; the input and output layers’ info will appear on the right side.
Since the model was trained by you, please confirm that it accepts a gray image with 200x64 resolution as input and that the input tensor dimensions are NCHW.
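If Netron is unwieldy for the full graph, the same input/output info can also be printed with a short script. A minimal sketch using the onnx Python package (the model path is taken from the configuration above):

import onnx

# Load the exported OCRNet model (path as in the nvinfer configuration above)
model = onnx.load("models/LP/LPR/best_accuracy.onnx")

# Print the name and dimensions of every graph input and output;
# a gray 200x64 NCHW input should appear as something like [N, 1, 64, 200]
for tensor in list(model.graph.input) + list(model.graph.output):
    dims = [d.dim_param or d.dim_value for d in tensor.type.tensor_type.shape.dim]
    print(tensor.name, dims)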
Please set “model-color-format=2” in the nvinfer configuration file for such a “gray 200x64 NCHW input” ONNX model.
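For such a model the relevant part of the nvinfer configuration would look roughly like this (a sketch, not a verified configuration: infer-dims is given in channel;height;width order and is an assumption derived from the 200x64 gray input, so please verify it against the actual ONNX input dimensions):

[property]
onnx-file=models/LP/LPR/best_accuracy.onnx
#0=RGB 1=BGR 2=GRAY
model-color-format=2
# 1 channel, 64 height, 200 width (CHW order); must match the ONNX input
infer-dims=1;64;200
# The detector blob names from the LPR sample do not apply to this model;
# either set the real output names (see the script above) or leave the
# line out, in which case nvinfer should use the ONNX model's own outputs
#output-blob-names=...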
Unfortunately this model is unable to detect “spaces” or other delimiters (which are important for number plates in parts of the world other than China or the USA).
I tried to train LPRNet with a character set containing spaces and a minus sign, to no avail.
I got a hint to try OCRNet. So I ran the aforementioned notebook for OCRNet training unchanged and got that “best_accuracy.onnx” network.
I just replaced the LPR network with the OCR network, and this does not work (just commenting out the LPR model line above and uncommenting the other).
It fails with the aforementioned error regarding colour channels. Setting the colour mode to GRAY is accepted initially, but it crashes at runtime.