When I use nvdspreprocess with YOLOv7, the GPU runs at 100%

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** Jetson
**• DeepStream Version** 6.2
**• JetPack Version (valid for Jetson only)** 5.1.1
**• TensorRT Version** 8.5
**• NVIDIA GPU Driver Version (valid for GPU only)**
**• Issue Type (questions, new requirements, bugs)** bugs
Here are my nvdspreprocess and nvinfer configs:

[property]
enable=1

# list of component gie-id for which tensor is prepared

target-unique-ids=1

# 0=NCHW, 1=NHWC, 2=CUSTOM

network-input-order=0

# 0=process on objects 1=process on frames

process-on-frame=1

#uniquely identify the metadata generated by this element

unique-id=5

# gpu-id to be used

gpu-id=0

# if enabled maintain the aspect ratio while scaling

maintain-aspect-ratio=1

# if enabled pad symmetrically with maintain-aspect-ratio enabled

symmetric-padding=1

# processing width/height at which the image is scaled

processing-width=640
processing-height=640

# max buffer in scaling buffer pool

scaling-buf-pool-size=6

# max buffer in tensor buffer pool

tensor-buf-pool-size=6

# tensor shape based on network-input-order

network-input-shape= 4;3;640;640

# 0=RGB, 1=BGR, 2=GRAY

network-color-format=1

# 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16

tensor-data-type=0

# tensor name same as input layer name

tensor-name=images

# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED

scaling-pool-memory-type=0

# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC

scaling-pool-compute-hw=0

# Scaling Interpolation method
# 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
# 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
# 6=NvBufSurfTransformInter_Default

scaling-filter=1

# custom library .so path having custom functionality

custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so

# custom tensor preparation function name having predefined input/outputs
# check the default custom library nvdspreprocess_lib for more info

custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]

# Below parameters get used when using the default custom library nvdspreprocess_lib

# network scaling factor

pixel-normalization-factor=0.003921568

# mean file path in ppm format

#mean-file=

# array of offsets for each channel

#offsets=

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-0=150;480;400;300

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=1
model-engine-file=engines/yolov7-fire-smoke-640-b4-fp16.engine

labelfile-path=labels/tx_fire_smoke.txt

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=2
gie-unique-id=1
network-type=0
interval=20

is-classifier=0

# 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (No clustering)

cluster-mode=2
maintain-aspect-ratio=1

# DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

parse-bbox-func-name=NvDsInferParseCustomEfficientNMS
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.15

I use only one RTSP stream, and one stream for nvinfer, but GPU usage is always at 100%.

This just means that GPU utilization is high. How does this affect your project?

Is this normal? When I use this model without nvdspreprocess, GPU usage is much lower, around 50%.
I don't think it is normal for GPU usage to stay at 100%.

It depends on the algorithm used in your preprocess plugin. Could you reproduce this with our demo, or attach your project for us? You can message me by clicking on my icon.

I found the reason.
It seems that if I use nvdspreprocess in my pipeline, the interval parameter of nvinfer does not take effect, so GPU usage stays at 100%.
I need interval to keep GPU usage low.

Currently, we do not support the interval scenario when nvinfer infers on tensor data from the preprocess plugin. Since nvinfer is open source (sources\gst-plugins\gst-nvinfer\gstnvinfer.cpp), can you try the following change and check whether it meets your needs?

--- a/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp
+++ b/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp
@@ -1952,6 +1952,15 @@ static GstFlowReturn
 gst_nvinfer_process_tensor_input (GstNvInfer * nvinfer, GstBuffer * inbuf,
     NvBufSurface * in_surf)
 {
+  gboolean skip_batch;
+
+  /* Process the batch only when interval_counter is a multiple of (interval + 1). */
+  skip_batch = (nvinfer->interval_counter++ % (nvinfer->interval + 1) > 0);
+
+  if (skip_batch) {
+    return GST_FLOW_OK;
+  }

OK, I will try it.
Do you have a plan to fix this in a future version of DeepStream?

Yes. We will discuss and add this feature later. Thanks.
