Nvinfer giving warning when adding nvdspreprocess for YOLOv5

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU 4GB
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5
• NVIDIA GPU Driver Version (valid for GPU only) CUDA 12.2
• Issue Type (questions, new requirements, bugs)
This is a bug. When using deepstream-preprocess.py, I was running a YOLOv5 model for inference with the following nvinfer config file:
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=models/yolo-obj.cfg
model-file=models/yolo-obj_best.weights
#model-engine-file=model_b5_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
#network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
input-tensor-from-meta=1
crop-objects-to-roi-boundary=1
symmetric-padding=1
force-implicit-batch-dim=1
workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
eps=0.2
group-threshold=1

and the config_preprocess file is:

# The values in the config file are overridden by values set through GObject properties.

[property]
enable=1
target-unique-ids=1
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0

process-on-frame=1
# if enabled maintain the aspect ratio while scaling
maintain-aspect-ratio=1
# if enabled pad symmetrically with maintain-aspect-ratio enabled
symmetric-padding=1
# processing width/height at which image scaled
processing-width=608
processing-height=608
scaling-buf-pool-size=6
tensor-buf-pool-size=6
# tensor shape based on network-input-order
network-input-shape=12;3;608;608
# 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
# 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=input_1
# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
# Scaling Interpolation method
# 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
# 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
# 6=NvBufSurfTransformInter_Default
scaling-filter=0
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.003921568
#mean-file=
#offsets=

[group-0]
src-ids=0;
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
draw-roi=1
roi-params-src-0=537;179;554;780;537;179;554;780;537;179;554;780;
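For context, this is roughly how a deepstream-preprocess.py-style pipeline wires nvdspreprocess in front of nvinfer so that nvinfer consumes the tensor prepared by the preprocess plugin (a minimal sketch; element and file names here are assumptions, not my exact script):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvdspreprocess scales/converts the frames (or ROIs) and attaches the tensor as metadata
preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess-plugin")
preprocess.set_property("config-file", "config_preprocess.txt")

# nvinfer takes its input tensor from that metadata instead of doing its own
# preprocessing (same effect as input-tensor-from-meta=1 in the nvinfer config)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_yolov5.txt")
pgie.set_property("input-tensor-meta", True)

# ... streammux, sink, etc. are created as in the sample ...
# pipeline.add(preprocess); pipeline.add(pgie)
# streammux.link(preprocess)
# preprocess.link(pgie)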
These are my TensorRT engine layers:
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x608x608
1 OUTPUT kFLOAT boxes 22743x4
2 OUTPUT kFLOAT scores 22743x1
3 OUTPUT kFLOAT classes 22743x1

I get this warning:
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1
(1): gstnvinfer.cpp(1972): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
0:01:48.700752983 3839 0x2c6c980 WARN nvinfer gstnvinfer.cpp:1972:gst_nvinfer_process_tensor_input: warning: nvinfer could not find input layer with name = input_1
But no inference is done. Could you help me figure out where I am going wrong?

Please set “tensor-name” to YOLOv5’s input layer name.
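Given the engine info printed above (INPUT kFLOAT input 3x608x608), the input layer is named "input", so the preprocess config should use that name. A sketch of the relevant lines (the rest of the [property] section stays as posted):

# config_preprocess: tensor-name must match the engine's input layer name
tensor-name=input
# network-input-shape is batch;C;H;W (NCHW) and must match the 3x608x608 input
network-input-shape=12;3;608;608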

Actually, I changed it to "input" and it is working now. The 3x608x608 shape also mattered while configuring config_nvdspreprocess.txt.
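For anyone hitting the same warning: one way to double-check the engine's input binding name and shape before editing config_nvdspreprocess.txt is to inspect the serialized engine with the TensorRT Python API (a minimal sketch; the engine path is an assumption):

import tensorrt as trt

def print_engine_bindings(engine_path="model_b1_gpu0_fp32.engine"):
    # Deserialize the engine and list its bindings so tensor-name and
    # network-input-shape in the preprocess config can be matched against them.
    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    for i in range(engine.num_bindings):
        kind = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
        print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))

print_engine_bindings()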
