VIC Configuration failed image scale factor exceeds 16

Please provide complete information as applicable to your setup.

• **Hardware Platform (Jetson / GPU)** Jetson AGX
• **DeepStream Version** 5.1
• **JetPack Version (valid for Jetson only)** 4.5.1
• **TensorRT Version** 7.1.3.0
• **NVIDIA GPU Driver Version (valid for GPU only)**

I am running a DeepStream Python application that performs vehicle and license-plate detection plus license-plate colour classification. When the colour classification runs, the error below sometimes occurs. I suspect it is caused by the size of the input objects, but I have already set input-object-min-width=0 and input-object-min-height=0 and the error still appears.

pgie-config (yolov4-coco-80classes, input: 3×416×416):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=/home/nvidia/FIRST_TO_DO/yolov4tinyfp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/labels.txt
num-detected-classes=80
batch-size=6
interval=0
force-implicit-batch-dim=1
process-mode=1
#0=RGB, 1=BGR
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2

gie-unique-id=1

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2

maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
#pre-cluster-threshold=0.2
#eps=0.2
#group-threshold=1

nms-iou-threshold=0.4
pre-cluster-threshold=0.4

[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
#ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
ll-config-file=tracker_config.yml
#enable-past-frame=1
enable-batch-process=1

sgie1-config (yolov4 plate detection, input: 3×416×416):
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#net-scale-factor=1
model-engine-file=/home/nvidia/FIRST_TO_DO/pldf16.engine
labelfile-path=./pld.txt
num-detected-classes=1
batch-size=6
interval=0
force-implicit-batch-dim=1
##1 Primary 2 Secondary
process-mode=2
#0=RGB, 1=BGR
model-color-format=0
input-object-min-width=0
input-object-min-height=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=2 #;5;7

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2

maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/nvdsinfer_custom_impl_Yolo_pld/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
#pre-cluster-threshold=0.2
#eps=0.2
#group-threshold=1

nms-iou-threshold=0.4
pre-cluster-threshold=0.6
#detected-min-w=40
#detected-min-h=40
#detected-max-w=1920
#detected-max-h=1920

sgie2-config (plate-colour classifier; I trained it myself in Keras; input: 3×9×34):

[property]
gpu-id=0
net-scale-factor=0.0039215686274
mean-file=/home/nvidia/FIRST_TO_DO/pltcolor.ppm
#0=RGB, 1=BGR
model-color-format=1
labelfile-path=./color.txt

model-engine-file=/home/nvidia/FIRST_TO_DO/pltcolor_nchw.engine
batch-size=1
input-object-min-width=0
input-object-min-height=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2

##1 Primary 2 Secondary
process-mode=2
interval=0

# current gie id

gie-unique-id=3

#0 detector 1 classifier 2 segmentation 3 instance segmentation
network-type=1
#is-classifier=1
classifier-async-mode=0

operate-on-gie-id=2
operate-on-class-ids=0
classifier-threshold=0.3
output-blob-names=activation_19

Now playing…
1 : rtsp://admin:abc12345@192.168.10.88:554/h264/ch33/main/av_streamm
2 : rtsp://admin:abc12345@192.168.10.88:554/h264/ch33/main/av_streamm
3 : rtsp://admin:abc12345@192.168.10.88:554/h264/ch33/main/av_streamm
4 : rtsp://admin:abc12345@192.168.10.88:554/h264/ch33/main/av_streamm
5 : rtsp://admin:abc12345@192.168.10.88:554/h264/ch33/main/av_streamm
6 : rtsp://admin:abc12345@192.168.10.88:554/h264/ch33/main/av_streamm
Starting pipeline

Using winsys: x11
0:00:03.233947232 31943 0x2ed2690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 3]: deserialized trt engine from :/home/nvidia/FIRST_TO_DO/pltcolor_nchw.engine
0:00:03.234059424 31943 0x2ed2690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 3]: Use deserialized engine model: /home/nvidia/FIRST_TO_DO/pltcolor_nchw.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT conv2d_11_input 3x9x34
1 OUTPUT kFLOAT activation_19 5

0:00:03.347550240 31943 0x2ed2690 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 3]: Load new model:secondary-pltcolor-config.txt sucessfully
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x416x416 min: 1x3x416x416 opt: 6x3x416x416 Max: 6x3x416x416
1 OUTPUT kFLOAT boxes 2535x1x4 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT confs 2535x1 min: 0 opt: 0 Max: 0

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
0:00:03.436612096 31943 0x2ed2690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 2]: deserialized trt engine from :/home/nvidia/FIRST_TO_DO/pldf16.engine
0:00:03.436958592 31943 0x2ed2690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 2]: Use deserialized engine model: /home/nvidia/FIRST_TO_DO/pldf16.engine
0:00:03.439897088 31943 0x2ed2690 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 2]: Load new model:secondary-lpd-config.txt sucessfully
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF][Warning] minTrackingConfidenceDuringInactive is deprecated
[NvDCF] Initialized
0:00:04.574794368 31943 0x2ed2690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/nvidia/FIRST_TO_DO/yolov4tinyfp16.engine
0:00:04.574939968 31943 0x2ed2690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/nvidia/FIRST_TO_DO/yolov4tinyfp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x416x416 min: 1x3x416x416 opt: 6x3x416x416 Max: 6x3x416x416
1 OUTPUT kFLOAT boxes 2535x1x4 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT confs 2535x80 min: 0 opt: 0 Max: 0

0:00:04.578871392 31943 0x2ed2690 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:primary-inference-config.txt sucessfully
Decodebin child added: source

Decodebin child added: source

Decodebin child added: source

Decodebin child added: source

Decodebin child added: source

Decodebin child added: source

Decodebin child added: decodebin0
Decodebin child added: decodebin1

Decodebin child added: rtph264depay0
Decodebin child added: rtph264depay1

Decodebin child added: h264parse0
Decodebin child added: h264parse1

Decodebin child added: capsfilter0

Decodebin child added: capsfilter1

Decodebin child added: nvv4l2decoder0
Decodebin child added: nvv4l2decoder1

Opening in BLOCKING MODE
Opening in BLOCKING MODE
Decodebin child added: decodebin3
Decodebin child added: decodebin2
Opening in BLOCKING MODE
Opening in BLOCKING MODE

NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
Decodebin child added: rtph264depay2

Decodebin child added: rtph264depay3

Decodebin child added: h264parse2

Decodebin child added: h264parse3

NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Decodebin child added: capsfilter2

Decodebin child added: capsfilter3

Decodebin child added: nvv4l2decoder3
Decodebin child added: decodebin4

Decodebin child added: nvv4l2decoder2

Opening in BLOCKING MODE
Decodebin child added: rtph264depay4

Decodebin child added: h264parse4
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
Opening in BLOCKING MODE

NvMMLiteBlockCreate : Block : BlockType = 261
Decodebin child added: capsfilter4
Decodebin child added: decodebin5

Opening in BLOCKING MODE

NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
Decodebin child added: rtph264depay5

NvMMLiteBlockCreate : Block : BlockType = 261
Decodebin child added: h264parse5

Decodebin child added: capsfilter5

Decodebin child added: nvv4l2decoder4

Decodebin child added: nvv4l2decoder5

In cb_newpad

Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
Opening in BLOCKING MODE
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f2c72f2e8 (GstCapsFeatures at 0x7e5c0870a0)>
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f2c72f408 (GstCapsFeatures at 0x7e540c5900)>
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f2c72f288 (GstCapsFeatures at 0x7e40015a20)>
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f2c72f2e8 (GstCapsFeatures at 0x7e3806aa80)>
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f2c72f288 (GstCapsFeatures at 0x7e300748a0)>
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f2c72f2e8 (GstCapsFeatures at 0x7e1c08ce00)>

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:3391: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
Exiting app
0:00:19.421499648 31943 0x28ca6d0 WARN nvinfer gstnvinfer.cpp:1277:convert_batch_and_push_to_input_thread: error: NvBufSurfTransform failed with error -2 while converting buffer
0:00:19.421635040 31943 0x28ca6d0 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:19.421653184 31943 0x28ca6d0 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: NvBufSurfTransform failed with error -2 while converting buffer (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1277): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:secondary2-color
0:00:19.461713312 31943 0x28ca6d0 WARN nvinfer gstnvinfer.cpp:1277:convert_batch_and_push_to_input_thread: error: NvBufSurfTransform failed with error -3 while converting buffer

Yes. On Jetson there is a scaling-factor limitation: the VIC only supports scale factors between 1/16 and 16. Objects whose crops are too small (or too large) relative to the SGIE network input will exceed that range and trigger this error.
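To see how small an object has to be before it trips the limit, the check is just arithmetic on the ratio between the SGIE input size and the object crop size. A minimal sketch, assuming the 34×9 (W×H) input reported in the engine log above; the helper names are illustrative, not a DeepStream API:

```python
# Sketch: check whether resizing an object crop to the SGIE input
# stays within the Jetson VIC scaling limit of [1/16, 16].
import math

VIC_MAX_SCALE = 16  # Jetson VIC supports scale factors from 1/16 to 16

def vic_scale_ok(obj_w, obj_h, net_w, net_h):
    """True if scaling (obj_w x obj_h) to (net_w x net_h) keeps both
    the width and height scale factors within [1/16, 16]."""
    for src, dst in ((obj_w, net_w), (obj_h, net_h)):
        scale = dst / src
        if scale > VIC_MAX_SCALE or scale < 1 / VIC_MAX_SCALE:
            return False
    return True

def min_object_size(net_w, net_h):
    """Smallest crop that can still be upscaled to the network input
    without exceeding the 16x limit."""
    return math.ceil(net_w / VIC_MAX_SCALE), math.ceil(net_h / VIC_MAX_SCALE)

# For the 34x9 colour-classifier input:
print(min_object_size(34, 9))      # -> (3, 1): crops must be >= 3 px wide
print(vic_scale_ok(2, 5, 34, 9))   # -> False: 34/2 = 17 exceeds 16
print(vic_scale_ok(3, 1, 34, 9))   # -> True
```

This is why input-object-min-width=0 and input-object-min-height=0 do not help: they disable the size filter entirely, so a 2-pixel-wide plate crop is still sent to the VIC. Setting the minimums at or above the values computed here (e.g. width ≥ 3 for this network) should keep the transform within range.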