Hardware Platform: dGPU
DeepStream Version: 7
While doing transfer learning following this notebook tao_tutorials/notebooks/tao_launcher_starter_kit/lprnet at main · NVIDIA/tao_tutorials · GitHub, I have come across two small problems so far when running the newly trained ONNX model with this configuration: deepstream_lpr_app/deepstream-lpr-app/lpr_config_pgie.txt at master · NVIDIA-AI-IOT/deepstream_lpr_app · GitHub
- There is an obviously stale setting:
num-detected-classes=45
- This setting causes a runtime error:
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:09.626027892 13488 0x1e2b760 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<sgie2-lpr> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid
0:00:09.626062854 13488 0x1e2b760 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<sgie2-lpr> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:09.627751012 13488 0x1e2b760 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<sgie2-lpr> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 3]: Use deserialized engine model: /home/ubuntu/vx-ai-golang/models/LP/LPR/lprnet_epoch-024.onnx_b16_gpu0_fp16.engine
- Then I would like a short explanation of this: I see the configuration is for a pgie, while up to now I have always been using a config for an sgie. Is that an important difference, and if so, what is the difference exactly? I never understood the magic behind pgie and sgie. I always thought that if one inference depends on another, it is a secondary. Since LPR depends on LPD, I would say running it as a secondary is correct. So what is the idea behind that LPR pgie config?