VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1.6

I am running a DeepStream Python application for LPR (license plate recognition), but I ran into a problem and really need your help!

This is my log:

Creating Pipeline 
 
Creating nvstreammux 

Creating source bin
source-bin-00
Creating Source 
 
Creating flvdemux 
 
Creating H264Parser 

Creating nvv4l2decoder Decoder 

Playing file rtmp://47.113.106.45/live/mult1 
Creating Sink 

Adding elements to Pipeline 

Creating nvdsosd 

Creating yolo_pgie 

Creating lpr_pgie and lpr_sgie1 and lpr_sgie2 

Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.

(python:22824): GStreamer-WARNING **: 15:23:01.306: Name 'Stream-muxer' is not unique in bin 'pipeline0', not adding
Linking elements only LRP 

Starting pipeline 

Opening in BLOCKING MODE 
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:18.727436753 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<lprsecondary2-nvinference-engine> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 6]: deserialized trt engine from :/home/nephilim/5gai/identify/deepstream_deploy/weights/lp/ch_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 16x3x48x96      Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:18.727706258 22824   0x55c23c3ef0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<lprsecondary2-nvinference-engine> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 6]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid
0:00:18.727807570 22824   0x55c23c3ef0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<lprsecondary2-nvinference-engine> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 6]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:18.727871955 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<lprsecondary2-nvinference-engine> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 6]: Use deserialized engine model: /home/nephilim/5gai/identify/deepstream_deploy/weights/lp/ch_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
0:00:18.901146527 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<lprsecondary2-nvinference-engine> [UID 6]: Load new model:/home/nephilim/5gai/identify/deepstream_deploy/cfg/lpr_config/lpr_config_sgie_ch.txt sucessfully
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:18.941394886 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<lpr-secondary1-nvinference-engine> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 5]: deserialized trt engine from :/home/nephilim/5gai/identify/deepstream_deploy/weights/lp/ccpd_pruned.etlt_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x1168x720      
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x73x45         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x73x45         

0:00:18.941600486 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<lpr-secondary1-nvinference-engine> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 5]: Use deserialized engine model: /home/nephilim/5gai/identify/deepstream_deploy/weights/lp/ccpd_pruned.etlt_b16_gpu0_int8.engine
0:00:19.041442243 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<lpr-secondary1-nvinference-engine> [UID 5]: Load new model:/home/nephilim/5gai/identify/deepstream_deploy/cfg/lpr_config/lpd_ccpd_config_yolocls.txt sucessfully
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Deserialize yoloLayer plugin: yolo_122
Deserialize yoloLayer plugin: yolo_125
Deserialize yoloLayer plugin: yolo_128
0:00:19.489571361 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<yolo-primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/nephilim/5gai/identify/deepstream_deploy/weights/det/yolov5m_b8.engine
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT yolo_122        255x80x80       
2   OUTPUT kFLOAT yolo_125        255x40x40       
3   OUTPUT kFLOAT yolo_128        255x20x20       

0:00:19.489807426 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<yolo-primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/nephilim/5gai/identify/deepstream_deploy/weights/det/yolov5m_b8.engine
0:00:19.589143869 22824   0x55c23c3ef0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<yolo-primary-inference> [UID 1]: Load new model:/home/nephilim/5gai/identify/deepstream_deploy/cfg/person_det_tracker_config/yolov5_config.txt sucessfully
In flvdemux_pad_add

NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
In flvdemux_pad_add

H264: Profile = 66, Level = 40 
NVMEDIA_ENC: bBlitMode is set to TRUE 
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:3752: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:01:08.022794232 22824   0x55c23b9320 WARN                 nvinfer gstnvinfer.cpp:1376:convert_batch_and_push_to_input_thread:<lpr-secondary1-nvinference-engine> error: NvBufSurfTransform failed with error -2 while converting buffer
Error: gst-stream-error-quark: NvBufSurfTransform failed with error -2 while converting buffer (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1376): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:lpr-secondary1-nvinference-engine
0:01:08.058934191 22824   0x55c1da5d40 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<yolo-primary-inference> error: Internal data stream error.
0:01:08.059032495 22824   0x55c1da5d40 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<yolo-primary-inference> error: streaming stopped, reason error (-5)

And these are my config files:
lpd_ccpd_config_yolocls.txt (1.2 KB)
lpr_config_sgie_ch.txt (1.0 KB)

Could someone help me, please?

This is a limitation on Jetson.

And the issue is reproduced; we are checking it.

What are your model’s input dimensions?

I think the problem is the LPD model. This model’s input is 3x1168x720. I changed input-object-min-height and input-object-min-width to 100, and it works.
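For reference, a minimal sketch of the change, assuming the standard nvinfer config-file format (all other keys in the attached lpd_ccpd_config_yolocls.txt are unchanged and omitted here):

```ini
# Sketch of the SGIE config change; surrounding keys omitted.
[property]
# Skip secondary inference on objects smaller than 100x100 px, so the
# VIC upscale to the 720x1168 network input stays within the 16x limit.
input-object-min-width=100
input-object-min-height=100
```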

0:00:21.059681465   825   0x556616f4f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<lpr-secondary1-nvinference-engine> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 5]: deserialized trt engine from :/home/nephilim/5gai/identify/deepstream_deploy/weights/lp/ccpd_pruned.etlt_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x1168x720      
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x73x45         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x73x45       

Why can’t 73 and 45 work?
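The arithmetic behind the limit can be sketched as follows. The 1168 and 720 here are the network H and W from the engine log above; the 16x cap is the VIC scale limit from the error message. Note that 1168/73 and 720/45 are both exactly 16.0, so 73x45 sits right on the boundary, and any internal alignment or padding of the crop could push the effective factor past 16 (this is an assumption about nvbufsurftransform's internals, not confirmed behavior):

```python
# Sketch: checking object sizes against the VIC 16x upscale limit.
# Model input from the engine log: 3 x 1168 x 720 (C x H x W).
MODEL_H, MODEL_W = 1168, 720
VIC_MAX_SCALE = 16

def vic_can_scale(obj_w, obj_h):
    """True if upscaling an obj_w x obj_h crop to the model input
    needs at most a 16x factor in each dimension."""
    return (MODEL_W / obj_w <= VIC_MAX_SCALE and
            MODEL_H / obj_h <= VIC_MAX_SCALE)

print(vic_can_scale(45, 73))    # exactly 16.0x in both dimensions: borderline
print(vic_can_scale(44, 72))    # > 16x, VIC would reject this
print(vic_can_scale(100, 100))  # comfortably within the limit
```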

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.